Dec 05 10:27:32 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 05 10:27:32 crc restorecon[4558]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 10:27:32 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 05 10:27:33 crc restorecon[4558]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc 
restorecon[4558]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc 
restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 05 
10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 10:27:33 crc 
restorecon[4558]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 10:27:33 crc 
restorecon[4558]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 
crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 
10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 10:27:33 crc 
restorecon[4558]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc 
restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc 
restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 
crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc 
restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc 
restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc 
restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc 
restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 10:27:33 crc restorecon[4558]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 05 10:27:33 crc restorecon[4558]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 05 10:27:33 crc kubenswrapper[4796]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 05 10:27:33 crc kubenswrapper[4796]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 05 10:27:33 crc kubenswrapper[4796]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 05 10:27:33 crc kubenswrapper[4796]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 05 10:27:33 crc kubenswrapper[4796]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 05 10:27:33 crc kubenswrapper[4796]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.891510 4796 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893696 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893712 4796 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893717 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893721 4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893725 4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893730 4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893735 4796 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893748 4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893752 4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893756 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893760 4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893765 4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893769 4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893772 4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893775 4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893779 4796 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893782 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893785 4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893798 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893802 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893805 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 
10:27:33.893809 4796 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893813 4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893816 4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893819 4796 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893823 4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893827 4796 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893830 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893834 4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893838 4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893843 4796 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893847 4796 feature_gate.go:330] unrecognized feature gate: Example Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893851 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893856 4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893861    4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893865    4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893869    4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893872    4796 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893876    4796 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893879    4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893882    4796 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893886    4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893889    4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893892    4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893896    4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893900    4796 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893903    4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893906    4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893909    4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893912    4796 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893916    4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893919    4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893922    4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893926    4796 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893929    4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893933    4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893936    4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893940    4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893944    4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893948    4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893951    4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893955    4796 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893958    4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893962    4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893965    4796 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893969    4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893972    4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893976    4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893980    4796 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893983    4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.893986    4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894061    4796 flags.go:64] FLAG: --address="0.0.0.0"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894070    4796 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894077    4796 flags.go:64] FLAG: --anonymous-auth="true"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894082    4796 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894087    4796 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894091    4796 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894095    4796 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894100    4796 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894104    4796 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894108    4796 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894112    4796 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894117    4796 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894121    4796 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894125    4796 flags.go:64] FLAG: --cgroup-root=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894129    4796 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894133    4796 flags.go:64] FLAG: --client-ca-file=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894137    4796 flags.go:64] FLAG: --cloud-config=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894141    4796 flags.go:64] FLAG: --cloud-provider=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894145    4796 flags.go:64] FLAG: --cluster-dns="[]"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894152    4796 flags.go:64] FLAG: --cluster-domain=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894156    4796 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894160    4796 flags.go:64] FLAG: --config-dir=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894164    4796 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894169    4796 flags.go:64] FLAG: --container-log-max-files="5"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894174    4796 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894177    4796 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894181    4796 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894185    4796 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894189    4796 flags.go:64] FLAG: --contention-profiling="false"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894193    4796 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894198    4796 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894202    4796 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894207    4796 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894212    4796 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894216    4796 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894220    4796 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894224    4796 flags.go:64] FLAG: --enable-load-reader="false"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894228    4796 flags.go:64] FLAG: --enable-server="true"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894232    4796 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894240    4796 flags.go:64] FLAG: --event-burst="100"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894244    4796 flags.go:64] FLAG: --event-qps="50"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894248    4796 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894252    4796 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894256    4796 flags.go:64] FLAG: --eviction-hard=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894261    4796 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894265    4796 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894268    4796 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894272    4796 flags.go:64] FLAG: --eviction-soft=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894276    4796 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894280    4796 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894284    4796 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894290    4796 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894295    4796 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894298    4796 flags.go:64] FLAG: --fail-swap-on="true"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894302    4796 flags.go:64] FLAG: --feature-gates=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894306    4796 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894310    4796 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894314    4796 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894317    4796 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894321    4796 flags.go:64] FLAG: --healthz-port="10248"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894325    4796 flags.go:64] FLAG: --help="false"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894329    4796 flags.go:64] FLAG: --hostname-override=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894332    4796 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894336    4796 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894340    4796 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894344    4796 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894348    4796 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894352    4796 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894356    4796 flags.go:64] FLAG: --image-service-endpoint=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894360    4796 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894364    4796 flags.go:64] FLAG: --kube-api-burst="100"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894368    4796 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894372    4796 flags.go:64] FLAG: --kube-api-qps="50"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894375    4796 flags.go:64] FLAG: --kube-reserved=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894379    4796 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894382    4796 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894386    4796 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894389    4796 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894393    4796 flags.go:64] FLAG: --lock-file=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894397    4796 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894401    4796 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894404    4796 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894410    4796 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894416    4796 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894420    4796 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894424    4796 flags.go:64] FLAG: --logging-format="text"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894427    4796 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894432    4796 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894436    4796 flags.go:64] FLAG: --manifest-url=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894439    4796 flags.go:64] FLAG: --manifest-url-header=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894444    4796 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894448    4796 flags.go:64] FLAG: --max-open-files="1000000"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894453    4796 flags.go:64] FLAG: --max-pods="110"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894458    4796 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894462    4796 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894467    4796 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894470    4796 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894475    4796 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894479    4796 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894483    4796 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894491    4796 flags.go:64] FLAG: --node-status-max-images="50"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894496    4796 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894500    4796 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894504    4796 flags.go:64] FLAG: --pod-cidr=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894510    4796 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894516    4796 flags.go:64] FLAG: --pod-manifest-path=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894520    4796 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894525    4796 flags.go:64] FLAG: --pods-per-core="0"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894529    4796 flags.go:64] FLAG: --port="10250"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894532    4796 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894536    4796 flags.go:64] FLAG: --provider-id=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894541    4796 flags.go:64] FLAG: --qos-reserved=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894545    4796 flags.go:64] FLAG: --read-only-port="10255"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894548    4796 flags.go:64] FLAG: --register-node="true"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894552    4796 flags.go:64] FLAG: --register-schedulable="true"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894557    4796 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894563    4796 flags.go:64] FLAG: --registry-burst="10"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894567    4796 flags.go:64] FLAG: --registry-qps="5"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894571    4796 flags.go:64] FLAG: --reserved-cpus=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894575    4796 flags.go:64] FLAG: --reserved-memory=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894580    4796 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894584    4796 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894588    4796 flags.go:64] FLAG: --rotate-certificates="false"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894592    4796 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894596    4796 flags.go:64] FLAG: --runonce="false"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894599    4796 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894603    4796 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894607    4796 flags.go:64] FLAG: --seccomp-default="false"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894611    4796 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894614    4796 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894618    4796 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894622    4796 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894626    4796 flags.go:64] FLAG: --storage-driver-password="root"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894629    4796 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894633    4796 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894637    4796 flags.go:64] FLAG: --storage-driver-user="root"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894641    4796 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894644    4796 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894648    4796 flags.go:64] FLAG: --system-cgroups=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894652    4796 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894658    4796 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894661    4796 flags.go:64] FLAG: --tls-cert-file=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894665    4796 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894669    4796 flags.go:64] FLAG: --tls-min-version=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894673    4796 flags.go:64] FLAG: --tls-private-key-file=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894676    4796 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894695    4796 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894700    4796 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894704    4796 flags.go:64] FLAG: --v="2"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894709    4796 flags.go:64] FLAG: --version="false"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894714    4796 flags.go:64] FLAG: --vmodule=""
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894718    4796 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894722    4796 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894820    4796 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894825    4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894828    4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894832    4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894836    4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894839    4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894843    4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894846    4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894849    4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894852    4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894857    4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894860    4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894865    4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894868    4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894871    4796 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894874    4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894878    4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894881    4796 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894884    4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894887    4796 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894890    4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894893    4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894897    4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894901    4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894904    4796 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894908    4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894911    4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894914    4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894917    4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894920    4796 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894923    4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894926    4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894930    4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894933    4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894937    4796 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894940    4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894943    4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894947    4796 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894952    4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894955    4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894959    4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894962    4796 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894966    4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894970    4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
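Earlier in the startup sequence, the `flags.go:64] FLAG: --name="value"` lines dump every effective kubelet command-line flag. Those lines follow a stable `key="value"` shape, so the whole dump can be collected into a dictionary for comparison against an expected configuration; this is a minimal sketch under that assumption, using a hypothetical two-line sample.

```python
import re

# Matches kubelet startup lines of the form seen in this journal:
#   ... flags.go:64] FLAG: --max-pods="110"
FLAG_RE = re.compile(r'flags\.go:\d+\] FLAG: (--[\w-]+)="(.*)"')

def parse_flags(lines):
    """Collect the kubelet's logged effective flags into a {flag: value} dict."""
    flags = {}
    for line in lines:
        m = FLAG_RE.search(line)
        if m:
            flags[m.group(1)] = m.group(2)
    return flags

# Hypothetical sample input copied from the log format above.
sample = [
    'Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894453    4796 flags.go:64] FLAG: --max-pods="110"',
    'Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.894121    4796 flags.go:64] FLAG: --cgroup-driver="cgroupfs"',
]
print(parse_flags(sample)["--max-pods"])  # 110
```

Diffing two such dictionaries (e.g. before and after a node reboot) is an easy way to spot configuration drift without reading the raw dump.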
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894975    4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894979    4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894983    4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894986    4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894990    4796 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894994    4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.894997    4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895001    4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895004    4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895007    4796 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895010    4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895014    4796 feature_gate.go:330] unrecognized feature gate: Example
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895017    4796 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895020    4796 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895027    4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895030    4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895035    4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895039    4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895042    4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895046    4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895049    4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895052    4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895055    4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895058    4796 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895061    4796 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895065    4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.895068    4796 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.895079    4796 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.902459    4796 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.902526    4796 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902609    4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902627    4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902633    4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902636    4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902641    4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902645    4796 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902651    4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902656    4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902660    4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902665    4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902670 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902674 4796 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902678 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902693 4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902698 4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902701 4796 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902705 4796 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902708 4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902712 4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902715 4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902719 4796 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902722 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902726 4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902730 4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902733 4796 
feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902747 4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902751 4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902755 4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902759 4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902762 4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902767 4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902772 4796 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902777 4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902784 4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902791 4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902795 4796 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902800 4796 feature_gate.go:330] unrecognized feature gate: Example Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902804 4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902809 4796 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902816 4796 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902821 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902827 4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902832 4796 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902837 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902842 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902847 4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902852 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902856 4796 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902860 4796 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902864 4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 
10:27:33.902867 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902871 4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902875 4796 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902879 4796 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902883 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902886 4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902890 4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902894 4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902898 4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902903 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902907 4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902911 4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902914 4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902918 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902922 4796 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 05 
10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902925 4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902929 4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902932 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902936 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902939 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.902943 4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.902952 4796 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903088 4796 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903108 4796 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903113 4796 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903120 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903124 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903128 4796 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903132 4796 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903136 4796 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903141 4796 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903145 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903149 4796 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903152 4796 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903156 4796 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903161 4796 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903165 4796 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903169 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903172 4796 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903176 4796 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903180 4796 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903183 4796 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903188 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903192 4796 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903196 4796 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903200 4796 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903205 4796 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903209 4796 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903214 4796 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903218 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903222 4796 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903226 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903230 4796 feature_gate.go:330] unrecognized feature gate: Example Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903234 4796 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903238 4796 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903242 4796 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903246 4796 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903250 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903254 4796 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903258 4796 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 
10:27:33.903262 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903265 4796 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903269 4796 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903272 4796 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903276 4796 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903279 4796 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903284 4796 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903287 4796 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903291 4796 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903294 4796 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903298 4796 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903301 4796 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903305 4796 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903308 4796 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903314 4796 feature_gate.go:330] unrecognized feature gate: 
ConsolePluginContentSecurityPolicy Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903318 4796 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903322 4796 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903326 4796 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903329 4796 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903332 4796 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903336 4796 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903339 4796 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903342 4796 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903346 4796 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903349 4796 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903352 4796 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903356 4796 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903359 4796 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903362 4796 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903365 4796 
feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903369 4796 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903373 4796 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.903377 4796 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.903384 4796 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.904074 4796 server.go:940] "Client rotation is on, will bootstrap in background" Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.907367 4796 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.907475 4796 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.908593 4796 server.go:997] "Starting client certificate rotation" Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.908620 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.908877 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-13 11:36:17.870567047 +0000 UTC Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.908999 4796 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 193h8m43.961572298s for next certificate rotation Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.922751 4796 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.924561 4796 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.935647 4796 log.go:25] "Validated CRI v1 runtime API" Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.953431 4796 log.go:25] "Validated CRI v1 image API" Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.955536 4796 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.958725 4796 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-05-10-24-20-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.958761 4796 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 
blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:49 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm:{mountpoint:/var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:50 fsType:tmpfs blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/94b752e0a51c0134b00ddef6dc7a933a9d7c1d9bdc88a18dae4192a0d557d623/merged major:0 minor:43 fsType:overlay blockSize:0}] Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.972211 4796 manager.go:217] Machine: {Timestamp:2025-12-05 10:27:33.970475826 +0000 UTC m=+0.258581359 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2445404 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:bd1699b9-79b9-439b-a76e-17d5109bc482 BootID:461f0aa3-0c3e-46e8-8138-7f8b2360aec8 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:50 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} 
{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:49 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/containers/storage/overlay-containers/75d81934760b26101869fbd8e4b5954c62b019c1cc3e5a0c9f82ed8de46b3b22/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:65536000 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:2a:cb:dd Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:enp3s0 MacAddress:fa:16:3e:2a:cb:dd Speed:-1 Mtu:1500} {Name:enp7s0 MacAddress:fa:16:3e:00:23:d1 Speed:-1 Mtu:1440} {Name:enp7s0.20 MacAddress:52:54:00:be:a6:48 Speed:-1 Mtu:1436} {Name:enp7s0.21 MacAddress:52:54:00:f8:f7:66 Speed:-1 Mtu:1436} {Name:enp7s0.22 MacAddress:52:54:00:7a:c8:ec Speed:-1 Mtu:1436} {Name:eth10 MacAddress:2e:be:2e:67:8c:a3 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:56:08:1c:a1:9f:75 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:65536 Type:Data Level:1} {Id:0 Size:65536 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:65536 Type:Data Level:1} {Id:1 Size:65536 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:65536 Type:Data Level:1} {Id:10 Size:65536 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:65536 Type:Data Level:1} {Id:11 Size:65536 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:65536 Type:Data Level:1} {Id:2 Size:65536 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:65536 Type:Data Level:1} {Id:3 Size:65536 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:65536 Type:Data Level:1} {Id:4 Size:65536 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:65536 Type:Data Level:1} {Id:5 Size:65536 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:65536 Type:Data Level:1} {Id:6 Size:65536 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:65536 Type:Data Level:1} {Id:7 Size:65536 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:65536 Type:Data Level:1} {Id:8 Size:65536 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 
Size:65536 Type:Data Level:1} {Id:9 Size:65536 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.972407 4796 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.972539 4796 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.975421 4796 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.976027 4796 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.976081 4796 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.976349 4796 topology_manager.go:138] "Creating topology manager with none policy"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.976364 4796 container_manager_linux.go:303] "Creating device plugin manager"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.976742 4796 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.976788 4796 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.976980 4796 state_mem.go:36] "Initialized new in-memory state store"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.977088 4796 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.978524 4796 kubelet.go:418] "Attempting to sync node with API server"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.978554 4796 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.978588 4796 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.978603 4796 kubelet.go:324] "Adding apiserver pod source"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.978617 4796 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.980802 4796 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.981373 4796 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.982636 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.25.20:6443: connect: connection refused
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.982704 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.25.20:6443: connect: connection refused
Dec 05 10:27:33 crc kubenswrapper[4796]: E1205 10:27:33.982726 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 192.168.25.20:6443: connect: connection refused" logger="UnhandledError"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.982786 4796 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 05 10:27:33 crc kubenswrapper[4796]: E1205 10:27:33.982792 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.25.20:6443: connect: connection refused" logger="UnhandledError"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.984168 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.984195 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.984206 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.984214 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.984230 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.984239 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.984249 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.984262 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.984275 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.984287 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.984300 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.984309 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.985293 4796 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.985853 4796 server.go:1280] "Started kubelet"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.986315 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 192.168.25.20:6443: connect: connection refused
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.986470 4796 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.986469 4796 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 05 10:27:33 crc systemd[1]: Started Kubernetes Kubelet.
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.988154 4796 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.988669 4796 server.go:460] "Adding debug handlers to kubelet server"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.989032 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.989128 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 16:35:39.679485105 +0000 UTC
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.989172 4796 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 222h8m5.690315609s for next certificate rotation
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.991809 4796 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.992138 4796 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.992164 4796 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.992357 4796 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 05 10:27:33 crc kubenswrapper[4796]: E1205 10:27:33.992380 4796 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 05 10:27:33 crc kubenswrapper[4796]: E1205 10:27:33.992889 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.20:6443: connect: connection refused" interval="200ms"
Dec 05 10:27:33 crc kubenswrapper[4796]: W1205 10:27:33.994228 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.25.20:6443: connect: connection refused
Dec 05 10:27:33 crc kubenswrapper[4796]: E1205 10:27:33.994295 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.25.20:6443: connect: connection refused" logger="UnhandledError"
Dec 05 10:27:33 crc kubenswrapper[4796]: E1205 10:27:33.988630 4796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 192.168.25.20:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187e4ae59b17f025 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 10:27:33.985816613 +0000 UTC m=+0.273922127,LastTimestamp:2025-12-05 10:27:33.985816613 +0000 UTC m=+0.273922127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.995111 4796 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.995376 4796 factory.go:55] Registering systemd factory
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.995398 4796 factory.go:221] Registration of the systemd container factory successfully
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.995737 4796 factory.go:153] Registering CRI-O factory
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.995755 4796 factory.go:221] Registration of the crio container factory successfully
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.995775 4796 factory.go:103] Registering Raw factory
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.995793 4796 manager.go:1196] Started watching for new ooms in manager
Dec 05 10:27:33 crc kubenswrapper[4796]: I1205 10:27:33.997193 4796 manager.go:319] Starting recovery of all containers
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000636 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000708 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000721 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000742 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000751 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000761 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000771 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000780 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000792 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000801 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000810 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000823 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000832 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000842 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000850 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000861 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000869 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000879 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000887 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000896 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000905 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000914 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000921 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000931 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000939 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000948 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000959 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000983 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.000993 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001003 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001029 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001042 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001053 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001067 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001077 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001099 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001107 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001115 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001124 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001133 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001142 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001152 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001162 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001171 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001181 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001190 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001200 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001209 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001220 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001229 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001238 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001247 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001262 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001273 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001282 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001292 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001300 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001310 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001334 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001344 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001353 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.001364 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.002801 4796 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.002836 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.002854 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.002867 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.002877 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.002886 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.002896 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.002906 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.002919 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.002930 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.002939 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.002948 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.002958 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.002968 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.002977 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.002985 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.002995 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003003 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003011 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003021 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003030 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003038 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003047 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003055 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003063 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003073 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003083 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" 
seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003093 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003103 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003113 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003121 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003131 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003149 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 
10:27:34.003158 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003167 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003175 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003184 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003194 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003204 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003214 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003223 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003233 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003242 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003257 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003268 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003278 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003288 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003298 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003309 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003319 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003330 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003340 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003350 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003360 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003371 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003379 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003388 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003397 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003407 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003418 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003427 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003438 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003448 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003458 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" 
seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003467 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003478 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003489 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003501 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003511 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003522 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 
10:27:34.003532 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003540 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003549 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003557 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003565 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003573 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003582 4796 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003591 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003599 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003608 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003617 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003626 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003635 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003644 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003655 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003663 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003737 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003987 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.003999 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004008 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004017 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004031 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004040 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004049 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004057 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004066 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004075 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004084 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004095 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004106 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004115 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004126 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004136 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004144 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004153 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004161 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004169 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004177 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004186 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004196 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004205 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004213 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004223 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 05 
10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004235 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004244 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004252 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004262 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004270 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004279 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004288 4796 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004296 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004306 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004315 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004326 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004334 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004343 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004359 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004369 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004377 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004385 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004394 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004403 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004412 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004421 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004428 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004437 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004446 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004455 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004464 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004473 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004481 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004490 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004498 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004507 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" 
seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004518 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004529 4796 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004537 4796 reconstruct.go:97] "Volume reconstruction finished" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.004545 4796 reconciler.go:26] "Reconciler: start to sync state" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.013443 4796 manager.go:324] Recovery completed Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.022898 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.025019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.025077 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.025092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.026711 4796 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.026740 4796 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 
10:27:34.026759 4796 state_mem.go:36] "Initialized new in-memory state store" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.027907 4796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.029420 4796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.029546 4796 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.029755 4796 kubelet.go:2335] "Starting kubelet main sync loop" Dec 05 10:27:34 crc kubenswrapper[4796]: E1205 10:27:34.029958 4796 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 05 10:27:34 crc kubenswrapper[4796]: W1205 10:27:34.030808 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.25.20:6443: connect: connection refused Dec 05 10:27:34 crc kubenswrapper[4796]: E1205 10:27:34.030867 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.25.20:6443: connect: connection refused" logger="UnhandledError" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.033156 4796 policy_none.go:49] "None policy: Start" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.034253 4796 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.034285 4796 state_mem.go:35] "Initializing new in-memory state store" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 
10:27:34.085069 4796 manager.go:334] "Starting Device Plugin manager" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.085106 4796 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.085119 4796 server.go:79] "Starting device plugin registration server" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.085895 4796 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.085917 4796 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.086337 4796 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.086431 4796 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.086446 4796 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 05 10:27:34 crc kubenswrapper[4796]: E1205 10:27:34.092175 4796 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.130534 4796 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.130603 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.131433 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.131461 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.131472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.131583 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.131780 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.131827 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.132194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.132220 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.132231 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.132324 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.132478 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.132522 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.132976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.133005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.133017 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.133144 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.133177 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.133192 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.133202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.133280 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.133302 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.133651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.133698 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.133724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.133755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.133769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.133778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.133876 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.133898 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.133920 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.133930 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:34 crc kubenswrapper[4796]: 
I1205 10:27:34.133944 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.133968 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.134511 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.134529 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.134543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.134553 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.134530 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.134616 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.134791 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.134822 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.135355 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.135383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.135394 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.186958 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.187650 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.187731 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.187745 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.187788 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 10:27:34 crc kubenswrapper[4796]: E1205 10:27:34.188410 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.20:6443: connect: connection refused" node="crc" Dec 05 10:27:34 crc kubenswrapper[4796]: E1205 10:27:34.193733 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.20:6443: connect: connection refused" interval="400ms" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.206258 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.206294 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.206319 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.206340 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.206408 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.206447 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.206520 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.206570 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.206592 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.206613 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" 
(UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.206630 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.206646 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.206713 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.206751 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.206786 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 10:27:34 
crc kubenswrapper[4796]: I1205 10:27:34.307790 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.307840 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.307858 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.307877 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.307892 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.307907 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308006 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308042 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308062 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308072 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308094 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 
05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308110 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308111 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308160 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308195 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308199 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308218 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308231 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308269 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308275 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308303 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308332 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308376 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308395 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308426 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308435 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308467 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308473 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308533 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.308577 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: E1205 10:27:34.326383 4796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 192.168.25.20:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187e4ae59b17f025 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 10:27:33.985816613 +0000 UTC m=+0.273922127,LastTimestamp:2025-12-05 10:27:33.985816613 +0000 UTC m=+0.273922127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.389510 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.390723 4796 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.390756 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.390766 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.390789 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 10:27:34 crc kubenswrapper[4796]: E1205 10:27:34.391133 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.20:6443: connect: connection refused" node="crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.468385 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.474509 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.496711 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: W1205 10:27:34.497875 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-8751b9189680d88c3598bf52494af8ece29649101d2716bc252cebb0371d85dc WatchSource:0}: Error finding container 8751b9189680d88c3598bf52494af8ece29649101d2716bc252cebb0371d85dc: Status 404 returned error can't find the container with id 8751b9189680d88c3598bf52494af8ece29649101d2716bc252cebb0371d85dc Dec 05 10:27:34 crc kubenswrapper[4796]: W1205 10:27:34.500522 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7b4b71c5c98fd68444a2c9edec2468588b18b10792c3c51daea64204c6499882 WatchSource:0}: Error finding container 7b4b71c5c98fd68444a2c9edec2468588b18b10792c3c51daea64204c6499882: Status 404 returned error can't find the container with id 7b4b71c5c98fd68444a2c9edec2468588b18b10792c3c51daea64204c6499882 Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.502555 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.506169 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 10:27:34 crc kubenswrapper[4796]: W1205 10:27:34.514807 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-3ed0313979963961430c7879df7e56d3b4587f0ec0ee11e57233a5dc98e68f0b WatchSource:0}: Error finding container 3ed0313979963961430c7879df7e56d3b4587f0ec0ee11e57233a5dc98e68f0b: Status 404 returned error can't find the container with id 3ed0313979963961430c7879df7e56d3b4587f0ec0ee11e57233a5dc98e68f0b Dec 05 10:27:34 crc kubenswrapper[4796]: W1205 10:27:34.515754 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-f6e60d54732bcba19ec5ee37b61b3218a03bb84605b0cb5651d56f9c6112c613 WatchSource:0}: Error finding container f6e60d54732bcba19ec5ee37b61b3218a03bb84605b0cb5651d56f9c6112c613: Status 404 returned error can't find the container with id f6e60d54732bcba19ec5ee37b61b3218a03bb84605b0cb5651d56f9c6112c613 Dec 05 10:27:34 crc kubenswrapper[4796]: W1205 10:27:34.516044 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-771ceaf639b4e750017632c0776f77bb0b3cb26152e284e7224a65513fca08e6 WatchSource:0}: Error finding container 771ceaf639b4e750017632c0776f77bb0b3cb26152e284e7224a65513fca08e6: Status 404 returned error can't find the container with id 771ceaf639b4e750017632c0776f77bb0b3cb26152e284e7224a65513fca08e6 Dec 05 10:27:34 crc kubenswrapper[4796]: E1205 10:27:34.594275 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.20:6443: connect: connection 
refused" interval="800ms" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.791526 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.793359 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.793407 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.793420 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.793461 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 10:27:34 crc kubenswrapper[4796]: E1205 10:27:34.793908 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.20:6443: connect: connection refused" node="crc" Dec 05 10:27:34 crc kubenswrapper[4796]: I1205 10:27:34.987030 4796 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 192.168.25.20:6443: connect: connection refused Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.035210 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9"} Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.035294 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f6e60d54732bcba19ec5ee37b61b3218a03bb84605b0cb5651d56f9c6112c613"} Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.036831 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0" exitCode=0 Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.036921 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0"} Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.036963 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3ed0313979963961430c7879df7e56d3b4587f0ec0ee11e57233a5dc98e68f0b"} Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.037051 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.037699 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.037726 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.037737 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.038252 4796 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e554379579217035feeceb2e5573308297b4dfa8d3d1436369be3b12c65c39ca" exitCode=0 Dec 05 10:27:35 crc 
kubenswrapper[4796]: I1205 10:27:35.038298 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e554379579217035feeceb2e5573308297b4dfa8d3d1436369be3b12c65c39ca"} Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.038315 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7b4b71c5c98fd68444a2c9edec2468588b18b10792c3c51daea64204c6499882"} Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.038360 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.039276 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.039319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.039336 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.039366 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.040780 4796 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3" exitCode=0 Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.040891 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3"} Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.040923 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8751b9189680d88c3598bf52494af8ece29649101d2716bc252cebb0371d85dc"} Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.041487 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.041597 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.041622 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.041638 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.042235 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.042259 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.042269 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.043931 4796 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded" exitCode=0 Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.043952 4796 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded"} Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.044047 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"771ceaf639b4e750017632c0776f77bb0b3cb26152e284e7224a65513fca08e6"} Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.044144 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.044746 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.044775 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.044785 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:35 crc kubenswrapper[4796]: W1205 10:27:35.157121 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.25.20:6443: connect: connection refused Dec 05 10:27:35 crc kubenswrapper[4796]: E1205 10:27:35.157281 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.25.20:6443: connect: connection refused" logger="UnhandledError" Dec 05 10:27:35 crc kubenswrapper[4796]: E1205 
10:27:35.395927 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.20:6443: connect: connection refused" interval="1.6s" Dec 05 10:27:35 crc kubenswrapper[4796]: W1205 10:27:35.501931 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 192.168.25.20:6443: connect: connection refused Dec 05 10:27:35 crc kubenswrapper[4796]: E1205 10:27:35.502026 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 192.168.25.20:6443: connect: connection refused" logger="UnhandledError" Dec 05 10:27:35 crc kubenswrapper[4796]: W1205 10:27:35.567793 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.25.20:6443: connect: connection refused Dec 05 10:27:35 crc kubenswrapper[4796]: E1205 10:27:35.567886 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.25.20:6443: connect: connection refused" logger="UnhandledError" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.594363 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.598044 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.598080 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.598090 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:35 crc kubenswrapper[4796]: I1205 10:27:35.598112 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 10:27:35 crc kubenswrapper[4796]: E1205 10:27:35.598575 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 192.168.25.20:6443: connect: connection refused" node="crc" Dec 05 10:27:35 crc kubenswrapper[4796]: W1205 10:27:35.629502 4796 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.25.20:6443: connect: connection refused Dec 05 10:27:35 crc kubenswrapper[4796]: E1205 10:27:35.629554 4796 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.25.20:6443: connect: connection refused" logger="UnhandledError" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.049245 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a445a8402d0df1de83bf6eae592b3fec8ffd202b76dc5e093b3d5a06b9e38f51"} Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.049443 4796 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.050700 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.050741 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.050751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.051731 4796 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1" exitCode=0 Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.051796 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1"} Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.051942 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.052593 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.052609 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.052618 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.054261 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ced98d292165ffd0447e73f7e0602c556744ead05af25fcb331c676fbec5998d"} Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.054475 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8101271bf7294e7870bda826831cfe83ac569186043e35df696bfa5fd9ddcef5"} Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.054487 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"464e352ac46b2137117bccb9e78a428dc04fdeaef46f4b91f6ce209776f0a31d"} Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.054551 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.054988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.055005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.055013 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.056531 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff"} Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.056554 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a"} Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.056564 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81"} Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.056619 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.057253 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.057273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.057281 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.059214 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea"} Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.059234 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361"} Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.059243 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b"} Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.059252 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2"} Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.059262 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9"} Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.059328 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.071490 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.071512 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.071522 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.233605 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.344099 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.847971 4796 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:27:36 crc kubenswrapper[4796]: I1205 10:27:36.855232 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.063884 4796 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd" exitCode=0 Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.063966 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.063981 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd"} Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.064006 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.063997 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.064121 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.064835 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.064865 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.064876 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.064843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.064969 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.064981 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.066092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.066161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.066175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.198724 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.199745 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.199774 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.199785 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.199806 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.328426 4796 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.328603 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.329675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.329720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:37 crc kubenswrapper[4796]: I1205 10:27:37.329731 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 10:27:38.069612 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 10:27:38.069660 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 10:27:38.070123 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b"} Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 10:27:38.070157 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a"} Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 10:27:38.070165 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1"} Dec 05 10:27:38 crc 
kubenswrapper[4796]: I1205 10:27:38.070172 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a"} Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 10:27:38.070179 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29"} Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 10:27:38.070241 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 10:27:38.070791 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 10:27:38.070824 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 10:27:38.070835 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 10:27:38.071344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 10:27:38.071376 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 10:27:38.071384 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 10:27:38.267183 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 
10:27:38.267345 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 10:27:38.267383 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 10:27:38.268398 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 10:27:38.268440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 10:27:38.268447 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:38 crc kubenswrapper[4796]: I1205 10:27:38.963526 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:27:39 crc kubenswrapper[4796]: I1205 10:27:39.072175 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:39 crc kubenswrapper[4796]: I1205 10:27:39.072815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:39 crc kubenswrapper[4796]: I1205 10:27:39.072840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:39 crc kubenswrapper[4796]: I1205 10:27:39.072847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:39 crc kubenswrapper[4796]: I1205 10:27:39.477045 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:27:39 crc kubenswrapper[4796]: I1205 10:27:39.477149 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Dec 05 10:27:39 crc kubenswrapper[4796]: I1205 10:27:39.478061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:39 crc kubenswrapper[4796]: I1205 10:27:39.478284 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:39 crc kubenswrapper[4796]: I1205 10:27:39.478355 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:41 crc kubenswrapper[4796]: I1205 10:27:41.073752 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 05 10:27:41 crc kubenswrapper[4796]: I1205 10:27:41.073870 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:41 crc kubenswrapper[4796]: I1205 10:27:41.075474 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:41 crc kubenswrapper[4796]: I1205 10:27:41.075516 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:41 crc kubenswrapper[4796]: I1205 10:27:41.075527 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:41 crc kubenswrapper[4796]: I1205 10:27:41.513588 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 05 10:27:41 crc kubenswrapper[4796]: I1205 10:27:41.609189 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:27:41 crc kubenswrapper[4796]: I1205 10:27:41.609355 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:41 crc kubenswrapper[4796]: I1205 10:27:41.610376 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:41 crc kubenswrapper[4796]: I1205 10:27:41.610430 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:41 crc kubenswrapper[4796]: I1205 10:27:41.610439 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:42 crc kubenswrapper[4796]: I1205 10:27:42.078779 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:42 crc kubenswrapper[4796]: I1205 10:27:42.079350 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:42 crc kubenswrapper[4796]: I1205 10:27:42.079396 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:42 crc kubenswrapper[4796]: I1205 10:27:42.079417 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:44 crc kubenswrapper[4796]: E1205 10:27:44.092310 4796 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 05 10:27:44 crc kubenswrapper[4796]: I1205 10:27:44.610181 4796 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 05 10:27:44 crc kubenswrapper[4796]: I1205 10:27:44.610238 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 05 10:27:45 crc kubenswrapper[4796]: I1205 10:27:45.654462 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 05 10:27:45 crc kubenswrapper[4796]: I1205 10:27:45.654517 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 05 10:27:45 crc kubenswrapper[4796]: I1205 10:27:45.658504 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 05 10:27:45 crc kubenswrapper[4796]: I1205 10:27:45.658581 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 05 10:27:46 crc kubenswrapper[4796]: I1205 10:27:46.349638 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]log ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]etcd ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/generic-apiserver-start-informers ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/priority-and-fairness-filter ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/start-apiextensions-informers ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/start-apiextensions-controllers ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/crd-informer-synced ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/start-system-namespaces-controller ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 05 10:27:46 crc kubenswrapper[4796]: 
[+]poststarthook/start-service-ip-repair-controllers ok Dec 05 10:27:46 crc kubenswrapper[4796]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Dec 05 10:27:46 crc kubenswrapper[4796]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/bootstrap-controller ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/start-kube-aggregator-informers ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/apiservice-registration-controller ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/apiservice-discovery-controller ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]autoregister-completion ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/apiservice-openapi-controller ok Dec 05 10:27:46 crc kubenswrapper[4796]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 05 10:27:46 crc kubenswrapper[4796]: livez check failed Dec 05 10:27:46 crc kubenswrapper[4796]: I1205 10:27:46.349724 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 10:27:48 crc kubenswrapper[4796]: I1205 10:27:48.964402 4796 patch_prober.go:28] interesting pod/kube-apiserver-crc 
container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 05 10:27:48 crc kubenswrapper[4796]: I1205 10:27:48.964478 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 05 10:27:49 crc kubenswrapper[4796]: I1205 10:27:49.481463 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:27:49 crc kubenswrapper[4796]: I1205 10:27:49.481583 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:49 crc kubenswrapper[4796]: I1205 10:27:49.482428 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:49 crc kubenswrapper[4796]: I1205 10:27:49.482452 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:49 crc kubenswrapper[4796]: I1205 10:27:49.482460 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:50 crc kubenswrapper[4796]: E1205 10:27:50.658518 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.660411 4796 trace.go:236] Trace[1092391045]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 
(05-Dec-2025 10:27:38.000) (total time: 12659ms): Dec 05 10:27:50 crc kubenswrapper[4796]: Trace[1092391045]: ---"Objects listed" error: 12659ms (10:27:50.660) Dec 05 10:27:50 crc kubenswrapper[4796]: Trace[1092391045]: [12.659426097s] [12.659426097s] END Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.660454 4796 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.664126 4796 trace.go:236] Trace[1780062517]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Dec-2025 10:27:38.493) (total time: 12170ms): Dec 05 10:27:50 crc kubenswrapper[4796]: Trace[1780062517]: ---"Objects listed" error: 12170ms (10:27:50.664) Dec 05 10:27:50 crc kubenswrapper[4796]: Trace[1780062517]: [12.170918336s] [12.170918336s] END Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.664147 4796 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 05 10:27:50 crc kubenswrapper[4796]: E1205 10:27:50.664411 4796 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.664663 4796 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.664735 4796 trace.go:236] Trace[1235050528]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Dec-2025 10:27:38.154) (total time: 12509ms): Dec 05 10:27:50 crc kubenswrapper[4796]: Trace[1235050528]: ---"Objects listed" error: 12509ms (10:27:50.664) Dec 05 10:27:50 crc kubenswrapper[4796]: Trace[1235050528]: [12.509836765s] [12.509836765s] END Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.664748 4796 reflector.go:368] Caches populated for *v1.Node from 
k8s.io/client-go/informers/factory.go:160 Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.667771 4796 trace.go:236] Trace[359096681]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (05-Dec-2025 10:27:38.051) (total time: 12615ms): Dec 05 10:27:50 crc kubenswrapper[4796]: Trace[359096681]: ---"Objects listed" error: 12615ms (10:27:50.667) Dec 05 10:27:50 crc kubenswrapper[4796]: Trace[359096681]: [12.615913934s] [12.615913934s] END Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.667797 4796 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.991522 4796 apiserver.go:52] "Watching apiserver" Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.994106 4796 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.994389 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.994810 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.994887 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.994901 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.994989 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:27:50 crc kubenswrapper[4796]: E1205 10:27:50.995035 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.995193 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 10:27:50 crc kubenswrapper[4796]: E1205 10:27:50.995251 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.995293 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:27:50 crc kubenswrapper[4796]: E1205 10:27:50.995350 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.996357 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.996635 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.997942 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.998182 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.998236 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.998808 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.998933 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.999524 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 05 10:27:50 crc kubenswrapper[4796]: I1205 10:27:50.999662 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.017473 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.028783 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.035671 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.044223 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.050712 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.057103 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.062605 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.091927 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.092956 4796 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.102770 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.103464 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.108340 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.108495 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.109868 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea" exitCode=255 Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.109915 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea"} Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.118635 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.125579 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.138466 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.154930 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.167775 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.167942 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.167971 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.167991 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168006 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168022 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168038 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168052 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168066 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168107 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168141 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168154 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 10:27:51 crc 
kubenswrapper[4796]: I1205 10:27:51.168170 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168185 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168201 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168214 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168234 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168251 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168265 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168282 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168298 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168294 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168315 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168371 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168390 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168383 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168467 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168497 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168522 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168540 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168558 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168575 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168591 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168608 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168626 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168651 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 05 
10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168670 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168704 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168722 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168741 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168790 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168812 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168830 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168846 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168882 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168899 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168929 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 
10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168947 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168963 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168979 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168994 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169031 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169049 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169066 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169087 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169108 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169124 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169142 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 
10:27:51.169160 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169177 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169194 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169211 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169229 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169252 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169270 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169287 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169304 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169321 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169337 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 10:27:51 crc kubenswrapper[4796]: 
I1205 10:27:51.169355 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169372 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169390 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169459 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169478 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169496 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169520 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169538 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169558 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169575 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169593 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 
10:27:51.169611 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169628 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169648 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169667 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169699 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169717 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169734 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169751 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169769 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169784 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169819 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169839 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169857 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169874 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169895 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169913 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169931 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " 
Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169948 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169964 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170001 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170018 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170034 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170050 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170067 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170087 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170102 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170119 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170139 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 05 10:27:51 crc 
kubenswrapper[4796]: I1205 10:27:51.170187 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170207 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170227 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170247 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170281 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170327 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170346 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170366 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170388 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170521 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170544 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170562 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170579 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168404 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170598 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168471 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170616 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168481 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168591 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170635 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170654 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170697 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170717 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170736 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170754 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170770 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170788 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170804 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170845 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170866 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170885 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170901 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170921 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170939 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170956 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170972 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170990 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171008 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171026 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171042 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171059 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171078 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171095 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171120 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171140 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171161 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171177 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171193 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171209 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171226 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171243 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171259 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" 
(UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171276 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171294 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171311 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171329 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171345 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171363 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171380 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171401 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171426 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171442 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171460 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 
10:27:51.171477 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171495 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171513 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171530 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171547 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171565 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171581 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171598 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171617 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171634 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171651 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171667 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171884 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171905 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171924 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171942 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171961 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" 
(UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171979 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171998 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172016 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172034 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172054 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172074 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172093 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172112 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172131 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172148 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172167 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172185 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172203 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172219 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172239 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172285 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 10:27:51 crc 
kubenswrapper[4796]: I1205 10:27:51.172311 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172331 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172354 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172380 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172400 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172433 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172458 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172483 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172500 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172521 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172542 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172560 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172578 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172656 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172669 4796 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172696 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172707 4796 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172718 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172728 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172737 4796 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.177137 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.180201 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.180846 4796 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168604 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.181511 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168629 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168626 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168715 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168758 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.168834 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169029 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169043 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169065 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169093 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169212 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169271 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169294 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169384 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169394 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169401 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169446 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169480 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169613 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169591 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.182191 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169611 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169666 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169792 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169853 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169897 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.169969 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170060 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170075 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170095 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170103 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170164 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170246 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170264 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170311 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170380 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170423 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170483 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170343 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170513 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170634 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170643 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170758 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170804 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170897 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170985 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.170988 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171040 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171103 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171136 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171141 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171508 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171522 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172209 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.171567 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172368 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172590 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172627 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.172816 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.172861 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.173758 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:27:51.673739717 +0000 UTC m=+17.961845229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.185084 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.185111 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.185274 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.175113 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.175134 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.175237 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.175319 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.175474 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.175795 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.175801 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.175875 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.175986 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.176061 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.176040 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.176093 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.176306 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.176364 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.176776 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.176816 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.177248 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.177361 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.177533 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.177775 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.177828 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.177895 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.178485 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.179172 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.179195 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.180281 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.185695 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 10:27:51.685658037 +0000 UTC m=+17.973763551 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.185765 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 10:27:51.685757314 +0000 UTC m=+17.973862828 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.180890 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.185900 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.185940 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.185993 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.186077 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.186214 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.186222 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.186406 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.188635 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.188846 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.188871 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.189060 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.189119 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.191224 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.191268 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.191285 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.191337 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 10:27:51.691324046 +0000 UTC m=+17.979429569 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.191459 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.191479 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.191495 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.191524 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 10:27:51.69151222 +0000 UTC m=+17.979617733 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.192675 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.192881 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.193158 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.193166 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.193789 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.193839 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.193847 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.193910 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.194524 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.194663 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.194722 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.194925 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.195308 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.195448 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.193350 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.195543 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.195549 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.195970 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.195196 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.196126 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.194931 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.196367 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.196596 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.196658 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.196664 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.195655 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.196661 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.196734 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.196766 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.196888 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.197021 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.197183 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.197455 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.197546 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.197567 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.197593 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.197605 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.197785 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.197815 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.197820 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.197827 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.197083 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.197980 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.198196 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.198260 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.198312 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.198654 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.198865 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.199557 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.199615 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.199793 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.197957 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.197072 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.200038 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.200094 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.200265 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.200741 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.201260 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.201389 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.201424 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.201402 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.201669 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.201945 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.201971 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.202554 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.202172 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.201759 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.202331 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.202479 4796 scope.go:117] "RemoveContainer" containerID="3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.201924 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.202726 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.202936 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.203097 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.203753 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.203817 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.203832 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.204779 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.205294 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.204007 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.205900 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.207355 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.207363 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.207561 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.209251 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.209335 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.209540 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.209754 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.209867 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.210818 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.211411 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.214452 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.215434 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.221213 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.224006 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.231173 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.233364 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.236162 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.239304 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.240211 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.247879 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.258831 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273396 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273437 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273476 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273550 4796 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273564 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273574 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273583 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273593 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on 
node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273602 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273610 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273618 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273627 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273636 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273633 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273645 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273717 4796 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273729 4796 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273738 4796 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273749 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273757 4796 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273765 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273773 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273780 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273789 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273796 4796 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273805 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273813 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273820 4796 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273828 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 
10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273836 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273843 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273851 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273859 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273867 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273875 4796 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273883 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273891 4796 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273899 4796 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273909 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273917 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273927 4796 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273936 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273944 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273954 4796 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 
crc kubenswrapper[4796]: I1205 10:27:51.273962 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273970 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273978 4796 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273987 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.273995 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274002 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274013 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274021 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274029 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274036 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274043 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274050 4796 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274057 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274065 4796 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274072 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: 
I1205 10:27:51.274079 4796 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274087 4796 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274095 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274102 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274109 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274117 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274125 4796 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274133 4796 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274143 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274151 4796 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274158 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274166 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274173 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274182 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274190 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: 
\"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274199 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274207 4796 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274215 4796 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274223 4796 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274231 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274239 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274246 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274254 4796 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274263 4796 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274270 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274277 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274284 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274292 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274299 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc 
kubenswrapper[4796]: I1205 10:27:51.274307 4796 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274314 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274322 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274329 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274336 4796 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274344 4796 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274352 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 
10:27:51.274364 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274372 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274379 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274388 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274396 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274404 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274411 4796 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274431 4796 reconciler_common.go:293] "Volume detached for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274438 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274445 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274452 4796 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274461 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274469 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274476 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274485 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" 
Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274492 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274501 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274509 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274517 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274525 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274533 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274542 4796 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274549 4796 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274556 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274564 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274571 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274579 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274586 4796 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274595 4796 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274603 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274610 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274618 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274627 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274634 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274643 4796 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274651 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274659 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" 
Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274668 4796 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274676 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274699 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274708 4796 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274716 4796 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274725 4796 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274733 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc 
kubenswrapper[4796]: I1205 10:27:51.274742 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274751 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274760 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274768 4796 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274776 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274784 4796 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274791 4796 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 
10:27:51.274799 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274806 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274814 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274822 4796 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274830 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274837 4796 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274844 4796 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274852 4796 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274860 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274868 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274875 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274883 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274891 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274908 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274916 4796 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274924 4796 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274931 4796 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274938 4796 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274947 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274955 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274963 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274970 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274978 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274988 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.274995 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275003 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275011 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275019 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275027 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275035 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275042 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275049 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275057 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275065 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275073 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275081 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node 
\"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275089 4796 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275095 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275103 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275111 4796 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275119 4796 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275127 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275134 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275141 
4796 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.275149 4796 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.306437 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.312012 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 05 10:27:51 crc kubenswrapper[4796]: W1205 10:27:51.317644 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-963048aaa46f628c9a6c5858e8e24373a75cb0ef2fcc248a853796538fab49a0 WatchSource:0}: Error finding container 963048aaa46f628c9a6c5858e8e24373a75cb0ef2fcc248a853796538fab49a0: Status 404 returned error can't find the container with id 963048aaa46f628c9a6c5858e8e24373a75cb0ef2fcc248a853796538fab49a0 Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.317743 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 05 10:27:51 crc kubenswrapper[4796]: W1205 10:27:51.329439 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-5b59e82569dc465a95e6e91164c17cc32c903f6518d2c2e5ec361c81018e468e WatchSource:0}: Error finding container 5b59e82569dc465a95e6e91164c17cc32c903f6518d2c2e5ec361c81018e468e: Status 404 returned error can't find the container with id 5b59e82569dc465a95e6e91164c17cc32c903f6518d2c2e5ec361c81018e468e Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.348807 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.350324 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.355760 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.361970 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.367948 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.379349 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.386389 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.393329 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.399182 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.404343 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.612881 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.615561 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.619472 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.621332 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a
8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.628500 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.634740 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.640592 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.651563 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.657509 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.664256 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.670260 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.676552 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a
8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.676889 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.676982 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:27:52.676965588 +0000 UTC m=+18.965071101 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.686381 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.692434 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.700152 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:51Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.717658 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:51Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.725892 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:
35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:51Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.735773 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:51Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.743995 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:51Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.754272 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:51Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.777672 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.777724 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.777745 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:27:51 crc kubenswrapper[4796]: I1205 10:27:51.777762 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.777860 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.777935 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.777936 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 10:27:52.777923167 +0000 UTC m=+19.066028680 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.777966 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.777991 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.778043 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 10:27:52.778026682 +0000 UTC m=+19.066132205 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.777863 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.778073 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.778083 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.778114 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 10:27:52.778107103 +0000 UTC m=+19.066212617 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.778163 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 10:27:51 crc kubenswrapper[4796]: E1205 10:27:51.778190 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 10:27:52.778181233 +0000 UTC m=+19.066286757 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.033203 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.033727 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.034429 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.035000 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.035523 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.036024 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.036565 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.037090 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.037648 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.038127 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.038610 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.039218 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.039661 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.042059 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.042534 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.043334 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.043839 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.044187 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.045040 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.045549 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.045974 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.046830 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.047222 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" 
path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.048127 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.048500 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.049401 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.049952 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.050385 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.051263 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.051675 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.052642 4796 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" 
podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.052784 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.054620 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.055971 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.056699 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.058655 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.060073 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.060760 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.061867 4796 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.062489 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.062987 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.064039 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.064997 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.065562 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.066353 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.066901 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.067750 4796 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.068425 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.069198 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.069659 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.070093 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.070904 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.071407 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.072264 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.113477 4796 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f"} Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.113534 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0"} Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.113545 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5fca33116879ae22b7b2c59fbb7827924170ea8ff478b00774b421a8bd852e73"} Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.114488 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4"} Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.114523 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"963048aaa46f628c9a6c5858e8e24373a75cb0ef2fcc248a853796538fab49a0"} Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.118860 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.121124 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf"} Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.121329 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.121814 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5b59e82569dc465a95e6e91164c17cc32c903f6518d2c2e5ec361c81018e468e"} Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.127566 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:27:52 crc kubenswrapper[4796]: E1205 10:27:52.129075 4796 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.134730 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:52Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.158308 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:52Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.180209 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:
35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:52Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.194220 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:52Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.213897 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:52Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.225338 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:52Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.235797 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:52Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.246190 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:52Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.256941 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:52Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.267974 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:52Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.278076 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:52Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.288063 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:52Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.297006 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:52Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.311133 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:52Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.322148 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:
35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:52Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.331518 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:52Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.342319 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:52Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.351205 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:52Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.684561 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:27:52 crc kubenswrapper[4796]: E1205 10:27:52.685431 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:27:54.685384339 +0000 UTC m=+20.973489853 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.785836 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.785895 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.785931 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:27:52 crc kubenswrapper[4796]: I1205 10:27:52.785961 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:27:52 crc kubenswrapper[4796]: E1205 10:27:52.786086 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 10:27:52 crc kubenswrapper[4796]: E1205 10:27:52.786112 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 10:27:52 crc kubenswrapper[4796]: E1205 10:27:52.786136 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 10:27:52 crc kubenswrapper[4796]: E1205 10:27:52.786152 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 10:27:52 crc kubenswrapper[4796]: E1205 10:27:52.786171 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:52 crc kubenswrapper[4796]: E1205 10:27:52.786196 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 10:27:54.786171529 +0000 UTC m=+21.074277052 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 10:27:52 crc kubenswrapper[4796]: E1205 10:27:52.786101 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 10:27:52 crc kubenswrapper[4796]: E1205 10:27:52.786262 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 10:27:52 crc kubenswrapper[4796]: E1205 10:27:52.786277 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:52 crc kubenswrapper[4796]: E1205 10:27:52.786240 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 
nodeName:}" failed. No retries permitted until 2025-12-05 10:27:54.786210683 +0000 UTC m=+21.074316186 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 10:27:52 crc kubenswrapper[4796]: E1205 10:27:52.786334 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 10:27:54.786323204 +0000 UTC m=+21.074428717 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:52 crc kubenswrapper[4796]: E1205 10:27:52.786353 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 10:27:54.78634771 +0000 UTC m=+21.074453223 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.030521 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.030628 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:27:53 crc kubenswrapper[4796]: E1205 10:27:53.030644 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.030521 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:27:53 crc kubenswrapper[4796]: E1205 10:27:53.030787 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:27:53 crc kubenswrapper[4796]: E1205 10:27:53.030871 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.865387 4796 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.866815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.866854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.866863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.866915 4796 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.871340 4796 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.871550 4796 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.872275 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.872301 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.872312 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.872323 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.872331 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:53Z","lastTransitionTime":"2025-12-05T10:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:53 crc kubenswrapper[4796]: E1205 10:27:53.884495 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:53Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.887132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.887159 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.887168 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.887179 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.887187 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:53Z","lastTransitionTime":"2025-12-05T10:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:53 crc kubenswrapper[4796]: E1205 10:27:53.895661 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:53Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.897960 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.897981 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.897990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.898002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.898012 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:53Z","lastTransitionTime":"2025-12-05T10:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:53 crc kubenswrapper[4796]: E1205 10:27:53.907086 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:53Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.909632 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.909677 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.909705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.909717 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.909724 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:53Z","lastTransitionTime":"2025-12-05T10:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:53 crc kubenswrapper[4796]: E1205 10:27:53.918504 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:53Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.921094 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.921132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.921141 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.921157 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.921167 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:53Z","lastTransitionTime":"2025-12-05T10:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:53 crc kubenswrapper[4796]: E1205 10:27:53.932174 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:53Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:53 crc kubenswrapper[4796]: E1205 10:27:53.932291 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.933663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.933711 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.933721 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.933732 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:53 crc kubenswrapper[4796]: I1205 10:27:53.933740 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:53Z","lastTransitionTime":"2025-12-05T10:27:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.034928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.034950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.034959 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.034970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.034978 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:54Z","lastTransitionTime":"2025-12-05T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.042659 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.053237 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.062630 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.072143 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.080994 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.089326 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.102448 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.112857 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:
35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.121777 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.126872 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712"} Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.137191 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.137229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.137239 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.137253 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 
10:27:54.137265 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:54Z","lastTransitionTime":"2025-12-05T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.138394 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.148001 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.160730 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.171137 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:27:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.181168 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.191406 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.201508 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.210662 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.225544 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.239110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.239159 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.239170 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.239186 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.239197 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:54Z","lastTransitionTime":"2025-12-05T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.342079 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.342125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.342135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.342153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.342165 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:54Z","lastTransitionTime":"2025-12-05T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.444483 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.444530 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.444545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.444562 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.444576 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:54Z","lastTransitionTime":"2025-12-05T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.546407 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.546464 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.546474 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.546490 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.546499 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:54Z","lastTransitionTime":"2025-12-05T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.647949 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.647972 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.647980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.647991 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.647998 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:54Z","lastTransitionTime":"2025-12-05T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.697438 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:27:54 crc kubenswrapper[4796]: E1205 10:27:54.697583 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-05 10:27:58.697567267 +0000 UTC m=+24.985672781 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.750058 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.750088 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.750097 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.750111 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.750120 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:54Z","lastTransitionTime":"2025-12-05T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.798617 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.798664 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.798710 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.798729 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:27:54 crc kubenswrapper[4796]: E1205 10:27:54.798824 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 05 10:27:54 crc kubenswrapper[4796]: E1205 10:27:54.798883 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 10:27:58.798868524 +0000 UTC m=+25.086974037 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 10:27:54 crc kubenswrapper[4796]: E1205 10:27:54.799102 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 10:27:54 crc kubenswrapper[4796]: E1205 10:27:54.799126 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 10:27:54 crc kubenswrapper[4796]: E1205 10:27:54.799129 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 10:27:54 crc kubenswrapper[4796]: E1205 10:27:54.799188 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 10:27:58.799174169 +0000 UTC m=+25.087279692 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 10:27:54 crc kubenswrapper[4796]: E1205 10:27:54.799140 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:54 crc kubenswrapper[4796]: E1205 10:27:54.799306 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 10:27:54 crc kubenswrapper[4796]: E1205 10:27:54.799382 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 10:27:58.799354308 +0000 UTC m=+25.087459821 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:54 crc kubenswrapper[4796]: E1205 10:27:54.799385 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 10:27:54 crc kubenswrapper[4796]: E1205 10:27:54.799435 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:54 crc kubenswrapper[4796]: E1205 10:27:54.799463 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 10:27:58.799455358 +0000 UTC m=+25.087560870 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.852404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.852626 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.852730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.852795 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.852914 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:54Z","lastTransitionTime":"2025-12-05T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.954232 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.954275 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.954306 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.954322 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:54 crc kubenswrapper[4796]: I1205 10:27:54.954332 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:54Z","lastTransitionTime":"2025-12-05T10:27:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.030216 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.030252 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.030230 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:27:55 crc kubenswrapper[4796]: E1205 10:27:55.030331 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:27:55 crc kubenswrapper[4796]: E1205 10:27:55.030415 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:27:55 crc kubenswrapper[4796]: E1205 10:27:55.030494 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.056412 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.056453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.056466 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.056481 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.056490 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:55Z","lastTransitionTime":"2025-12-05T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.158512 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.158559 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.158570 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.158585 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.158595 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:55Z","lastTransitionTime":"2025-12-05T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.261157 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.261199 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.261208 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.261222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.261232 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:55Z","lastTransitionTime":"2025-12-05T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.362832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.362876 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.362903 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.362918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.362926 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:55Z","lastTransitionTime":"2025-12-05T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.464959 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.464996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.465005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.465020 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.465031 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:55Z","lastTransitionTime":"2025-12-05T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.567093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.567150 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.567160 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.567173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.567182 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:55Z","lastTransitionTime":"2025-12-05T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.669512 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.669562 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.669572 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.669590 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.669599 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:55Z","lastTransitionTime":"2025-12-05T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.682735 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-tz2w5"] Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.683269 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tz2w5" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.685041 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.685131 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.685739 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.694939 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:55Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.705310 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:55Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.715390 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:55Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.738344 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:55Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.761377 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:27:55Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.771882 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.771913 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.771922 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.771934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.771943 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:55Z","lastTransitionTime":"2025-12-05T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.784006 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:55Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.806271 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww4p7\" (UniqueName: \"kubernetes.io/projected/0158bdbd-94bb-4421-b698-bdcbe2e7f37b-kube-api-access-ww4p7\") pod \"node-resolver-tz2w5\" (UID: \"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\") " pod="openshift-dns/node-resolver-tz2w5" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.806312 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0158bdbd-94bb-4421-b698-bdcbe2e7f37b-hosts-file\") pod \"node-resolver-tz2w5\" (UID: \"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\") " pod="openshift-dns/node-resolver-tz2w5" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.809558 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9
689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:55Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.822253 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\"
:{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:55Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.833850 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:55Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.843200 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:55Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.874470 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.874539 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.874557 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.874584 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.874605 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:55Z","lastTransitionTime":"2025-12-05T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.906946 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww4p7\" (UniqueName: \"kubernetes.io/projected/0158bdbd-94bb-4421-b698-bdcbe2e7f37b-kube-api-access-ww4p7\") pod \"node-resolver-tz2w5\" (UID: \"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\") " pod="openshift-dns/node-resolver-tz2w5" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.907002 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0158bdbd-94bb-4421-b698-bdcbe2e7f37b-hosts-file\") pod \"node-resolver-tz2w5\" (UID: \"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\") " pod="openshift-dns/node-resolver-tz2w5" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.907086 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0158bdbd-94bb-4421-b698-bdcbe2e7f37b-hosts-file\") pod \"node-resolver-tz2w5\" (UID: \"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\") " pod="openshift-dns/node-resolver-tz2w5" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.922248 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww4p7\" (UniqueName: \"kubernetes.io/projected/0158bdbd-94bb-4421-b698-bdcbe2e7f37b-kube-api-access-ww4p7\") pod \"node-resolver-tz2w5\" (UID: \"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\") " pod="openshift-dns/node-resolver-tz2w5" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.976742 4796 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.976782 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.976791 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.976806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.976816 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:55Z","lastTransitionTime":"2025-12-05T10:27:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:55 crc kubenswrapper[4796]: I1205 10:27:55.994135 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tz2w5" Dec 05 10:27:56 crc kubenswrapper[4796]: W1205 10:27:56.003727 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0158bdbd_94bb_4421_b698_bdcbe2e7f37b.slice/crio-aca6787413084a833473ef9c5053f138847df3a786338fd4c43602b1591a3d53 WatchSource:0}: Error finding container aca6787413084a833473ef9c5053f138847df3a786338fd4c43602b1591a3d53: Status 404 returned error can't find the container with id aca6787413084a833473ef9c5053f138847df3a786338fd4c43602b1591a3d53 Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.051526 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-cqj7h"] Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.051817 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-9pllw"] Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.051978 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.052068 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.052359 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ct8sh"] Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.052940 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.053869 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.053930 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.054047 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.054145 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.054359 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.055080 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.055352 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.055373 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.056241 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.057000 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.057596 4796 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.057800 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.068135 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.078769 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.079372 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.079471 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.079532 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.079598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.079657 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:56Z","lastTransitionTime":"2025-12-05T10:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.086535 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.097834 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.106201 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.121027 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f584
08f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf9
92a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.133301 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tz2w5" event={"ID":"0158bdbd-94bb-4421-b698-bdcbe2e7f37b","Type":"ContainerStarted","Data":"aca6787413084a833473ef9c5053f138847df3a786338fd4c43602b1591a3d53"} Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.138840 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.152569 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.168974 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.178641 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.181283 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.181315 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.181325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.181338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.181345 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:56Z","lastTransitionTime":"2025-12-05T10:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.188773 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.196987 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.207564 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.208927 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209000 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-host-var-lib-cni-multus\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209025 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-os-release\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209058 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7d541e60-9b92-4b9d-be51-5bd87e76deac-cni-binary-copy\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209083 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4drh\" (UniqueName: \"kubernetes.io/projected/7796bae1-68a7-44b4-98cc-0dd83da754bc-kube-api-access-j4drh\") pod \"machine-config-daemon-9pllw\" (UID: \"7796bae1-68a7-44b4-98cc-0dd83da754bc\") " pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209097 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 
10:27:56.209146 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-host-var-lib-kubelet\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209163 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-system-cni-dir\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209199 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-host-var-lib-cni-bin\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209213 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-cnibin\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209227 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chrfj\" (UniqueName: \"kubernetes.io/projected/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-kube-api-access-chrfj\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 
crc kubenswrapper[4796]: I1205 10:27:56.209243 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7796bae1-68a7-44b4-98cc-0dd83da754bc-mcd-auth-proxy-config\") pod \"machine-config-daemon-9pllw\" (UID: \"7796bae1-68a7-44b4-98cc-0dd83da754bc\") " pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209258 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-multus-cni-dir\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209271 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-multus-conf-dir\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209322 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-cni-binary-copy\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209374 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7796bae1-68a7-44b4-98cc-0dd83da754bc-rootfs\") pod \"machine-config-daemon-9pllw\" (UID: \"7796bae1-68a7-44b4-98cc-0dd83da754bc\") " 
pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209401 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7796bae1-68a7-44b4-98cc-0dd83da754bc-proxy-tls\") pod \"machine-config-daemon-9pllw\" (UID: \"7796bae1-68a7-44b4-98cc-0dd83da754bc\") " pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209439 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-cnibin\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209464 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-hostroot\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209498 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-multus-socket-dir-parent\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209517 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-host-run-netns\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " 
pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209570 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-etc-kubernetes\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209596 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmx6j\" (UniqueName: \"kubernetes.io/projected/7d541e60-9b92-4b9d-be51-5bd87e76deac-kube-api-access-kmx6j\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209631 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-host-run-k8s-cni-cncf-io\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209652 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-host-run-multus-certs\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209669 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7d541e60-9b92-4b9d-be51-5bd87e76deac-multus-daemon-config\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " 
pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209701 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-system-cni-dir\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.209718 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-os-release\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.216937 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.229269 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.237584 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:
35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.245753 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.254554 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.262839 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.271032 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 
10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.277658 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.282976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.283008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.283019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.283032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.283041 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:56Z","lastTransitionTime":"2025-12-05T10:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.285490 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.293893 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.302530 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.308952 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310149 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-etc-kubernetes\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310177 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmx6j\" (UniqueName: 
\"kubernetes.io/projected/7d541e60-9b92-4b9d-be51-5bd87e76deac-kube-api-access-kmx6j\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310200 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-host-run-k8s-cni-cncf-io\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310217 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-host-run-multus-certs\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310236 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7d541e60-9b92-4b9d-be51-5bd87e76deac-multus-daemon-config\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310254 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-system-cni-dir\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310259 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-host-run-multus-certs\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310270 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-os-release\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310306 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310323 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-os-release\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310331 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-host-var-lib-cni-multus\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310215 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-etc-kubernetes\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310347 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-os-release\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310363 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7d541e60-9b92-4b9d-be51-5bd87e76deac-cni-binary-copy\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310384 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-host-var-lib-cni-multus\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310387 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4drh\" (UniqueName: \"kubernetes.io/projected/7796bae1-68a7-44b4-98cc-0dd83da754bc-kube-api-access-j4drh\") pod \"machine-config-daemon-9pllw\" (UID: \"7796bae1-68a7-44b4-98cc-0dd83da754bc\") " pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310239 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-host-run-k8s-cni-cncf-io\") pod \"multus-cqj7h\" 
(UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310435 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310453 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-host-var-lib-kubelet\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310467 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chrfj\" (UniqueName: \"kubernetes.io/projected/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-kube-api-access-chrfj\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310482 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-system-cni-dir\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310495 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-host-var-lib-cni-bin\") pod \"multus-cqj7h\" (UID: 
\"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310511 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-cnibin\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310510 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-os-release\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310524 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-multus-conf-dir\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310547 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-cni-binary-copy\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310565 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7796bae1-68a7-44b4-98cc-0dd83da754bc-mcd-auth-proxy-config\") pod \"machine-config-daemon-9pllw\" (UID: \"7796bae1-68a7-44b4-98cc-0dd83da754bc\") " 
pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310579 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-multus-cni-dir\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310594 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-host-var-lib-kubelet\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310600 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7796bae1-68a7-44b4-98cc-0dd83da754bc-rootfs\") pod \"machine-config-daemon-9pllw\" (UID: \"7796bae1-68a7-44b4-98cc-0dd83da754bc\") " pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310621 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7796bae1-68a7-44b4-98cc-0dd83da754bc-proxy-tls\") pod \"machine-config-daemon-9pllw\" (UID: \"7796bae1-68a7-44b4-98cc-0dd83da754bc\") " pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310624 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-multus-conf-dir\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 
10:27:56.310636 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-cnibin\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310645 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-host-var-lib-cni-bin\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310650 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-hostroot\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310666 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-cnibin\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310666 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-multus-socket-dir-parent\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310707 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-host-run-netns\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310718 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-multus-socket-dir-parent\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310599 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-system-cni-dir\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310747 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-host-run-netns\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310770 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-system-cni-dir\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310812 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-multus-cni-dir\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " 
pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310841 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-cnibin\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310978 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7d541e60-9b92-4b9d-be51-5bd87e76deac-multus-daemon-config\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.310970 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.311024 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7796bae1-68a7-44b4-98cc-0dd83da754bc-rootfs\") pod \"machine-config-daemon-9pllw\" (UID: \"7796bae1-68a7-44b4-98cc-0dd83da754bc\") " pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.311033 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7d541e60-9b92-4b9d-be51-5bd87e76deac-hostroot\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.311180 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7d541e60-9b92-4b9d-be51-5bd87e76deac-cni-binary-copy\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.311239 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.311256 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-cni-binary-copy\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.311358 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7796bae1-68a7-44b4-98cc-0dd83da754bc-mcd-auth-proxy-config\") pod \"machine-config-daemon-9pllw\" (UID: \"7796bae1-68a7-44b4-98cc-0dd83da754bc\") " pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.313488 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7796bae1-68a7-44b4-98cc-0dd83da754bc-proxy-tls\") pod \"machine-config-daemon-9pllw\" (UID: \"7796bae1-68a7-44b4-98cc-0dd83da754bc\") " pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.322988 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-j4drh\" (UniqueName: \"kubernetes.io/projected/7796bae1-68a7-44b4-98cc-0dd83da754bc-kube-api-access-j4drh\") pod \"machine-config-daemon-9pllw\" (UID: \"7796bae1-68a7-44b4-98cc-0dd83da754bc\") " pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.323440 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chrfj\" (UniqueName: \"kubernetes.io/projected/e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7-kube-api-access-chrfj\") pod \"multus-additional-cni-plugins-ct8sh\" (UID: \"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\") " pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.323921 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmx6j\" (UniqueName: \"kubernetes.io/projected/7d541e60-9b92-4b9d-be51-5bd87e76deac-kube-api-access-kmx6j\") pod \"multus-cqj7h\" (UID: \"7d541e60-9b92-4b9d-be51-5bd87e76deac\") " pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.362994 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-cqj7h" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.367023 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:27:56 crc kubenswrapper[4796]: W1205 10:27:56.371900 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d541e60_9b92_4b9d_be51_5bd87e76deac.slice/crio-41f8351a70a3bff059f38a3f13adb25463ad1b6d31fc28dea121aec0228baa71 WatchSource:0}: Error finding container 41f8351a70a3bff059f38a3f13adb25463ad1b6d31fc28dea121aec0228baa71: Status 404 returned error can't find the container with id 41f8351a70a3bff059f38a3f13adb25463ad1b6d31fc28dea121aec0228baa71 Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.372666 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" Dec 05 10:27:56 crc kubenswrapper[4796]: W1205 10:27:56.377229 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7796bae1_68a7_44b4_98cc_0dd83da754bc.slice/crio-eb6eb5b0c617de1bd2c77270d48a42fadb0739b2b241c88e5afd2c9e7d9927f8 WatchSource:0}: Error finding container eb6eb5b0c617de1bd2c77270d48a42fadb0739b2b241c88e5afd2c9e7d9927f8: Status 404 returned error can't find the container with id eb6eb5b0c617de1bd2c77270d48a42fadb0739b2b241c88e5afd2c9e7d9927f8 Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.384586 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.384615 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.384625 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.384639 4796 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.384648 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:56Z","lastTransitionTime":"2025-12-05T10:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.426186 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xvb5x"] Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.426936 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.429187 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.429253 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.429325 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.429380 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.429533 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.431809 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 05 
10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.434342 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.442537 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.484295 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.486095 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.486122 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.486130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.486144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.486154 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:56Z","lastTransitionTime":"2025-12-05T10:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.499012 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.510002 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512339 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-log-socket\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512383 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-run-ovn-kubernetes\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512401 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22cc2\" (UniqueName: 
\"kubernetes.io/projected/d158ce1c-6415-4e69-a1fe-862330b25ff3-kube-api-access-22cc2\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512433 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-kubelet\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512447 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-run-netns\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512464 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d158ce1c-6415-4e69-a1fe-862330b25ff3-ovnkube-script-lib\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512488 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-var-lib-openvswitch\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512503 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-systemd-units\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512525 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d158ce1c-6415-4e69-a1fe-862330b25ff3-ovnkube-config\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512540 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d158ce1c-6415-4e69-a1fe-862330b25ff3-env-overrides\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512556 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-etc-openvswitch\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512570 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-run-systemd\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512594 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-run-openvswitch\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512608 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-node-log\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512622 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512644 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-cni-netd\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512708 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-slash\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512729 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d158ce1c-6415-4e69-a1fe-862330b25ff3-ovn-node-metrics-cert\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512838 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-cni-bin\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.512988 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-run-ovn\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.520405 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.531597 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.540259 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.554430 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.564935 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.573577 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.582260 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.588618 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.588652 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.588660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:56 crc 
kubenswrapper[4796]: I1205 10:27:56.588672 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.588694 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:56Z","lastTransitionTime":"2025-12-05T10:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.591033 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.604196 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.612965 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:56Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614224 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-cni-netd\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 
10:27:56.614256 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-slash\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614274 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d158ce1c-6415-4e69-a1fe-862330b25ff3-ovn-node-metrics-cert\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614289 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-cni-bin\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614313 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-run-ovn\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614329 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-log-socket\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614342 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-run-ovn-kubernetes\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614358 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22cc2\" (UniqueName: \"kubernetes.io/projected/d158ce1c-6415-4e69-a1fe-862330b25ff3-kube-api-access-22cc2\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614372 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d158ce1c-6415-4e69-a1fe-862330b25ff3-ovnkube-script-lib\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614369 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-cni-netd\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614377 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-slash\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614400 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-cni-bin\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614413 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-run-ovn-kubernetes\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614386 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-kubelet\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614455 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-run-ovn\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614461 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-log-socket\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614462 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-run-netns\") pod \"ovnkube-node-xvb5x\" (UID: 
\"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614512 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-var-lib-openvswitch\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614405 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-kubelet\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614531 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-systemd-units\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614556 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d158ce1c-6415-4e69-a1fe-862330b25ff3-ovnkube-config\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614564 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-var-lib-openvswitch\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614477 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-run-netns\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614627 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d158ce1c-6415-4e69-a1fe-862330b25ff3-env-overrides\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614672 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-systemd-units\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614696 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-etc-openvswitch\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614731 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-run-systemd\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 
10:27:56.614751 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614754 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-etc-openvswitch\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614781 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-run-openvswitch\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614797 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-node-log\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614800 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-run-openvswitch\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614814 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614815 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-run-systemd\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614845 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-node-log\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.614999 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d158ce1c-6415-4e69-a1fe-862330b25ff3-ovnkube-script-lib\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.615078 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d158ce1c-6415-4e69-a1fe-862330b25ff3-env-overrides\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.615100 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/d158ce1c-6415-4e69-a1fe-862330b25ff3-ovnkube-config\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.617522 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d158ce1c-6415-4e69-a1fe-862330b25ff3-ovn-node-metrics-cert\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.627958 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22cc2\" (UniqueName: \"kubernetes.io/projected/d158ce1c-6415-4e69-a1fe-862330b25ff3-kube-api-access-22cc2\") pod \"ovnkube-node-xvb5x\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.690758 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.690792 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.690802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.690815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.690823 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:56Z","lastTransitionTime":"2025-12-05T10:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.736277 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:27:56 crc kubenswrapper[4796]: W1205 10:27:56.744893 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd158ce1c_6415_4e69_a1fe_862330b25ff3.slice/crio-da355471901c9263fc643649db8eb7c43f691d3b41c33622a6ef20cb4bbe86de WatchSource:0}: Error finding container da355471901c9263fc643649db8eb7c43f691d3b41c33622a6ef20cb4bbe86de: Status 404 returned error can't find the container with id da355471901c9263fc643649db8eb7c43f691d3b41c33622a6ef20cb4bbe86de Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.792505 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.792534 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.792546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.792558 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.792566 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:56Z","lastTransitionTime":"2025-12-05T10:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.894570 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.894605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.894620 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.894633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.894641 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:56Z","lastTransitionTime":"2025-12-05T10:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.996226 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.996261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.996270 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.996284 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:56 crc kubenswrapper[4796]: I1205 10:27:56.996292 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:56Z","lastTransitionTime":"2025-12-05T10:27:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.030318 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:27:57 crc kubenswrapper[4796]: E1205 10:27:57.030415 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.030607 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.030606 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:27:57 crc kubenswrapper[4796]: E1205 10:27:57.030828 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:27:57 crc kubenswrapper[4796]: E1205 10:27:57.030840 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.098457 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.098720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.098780 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.098847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.098907 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:57Z","lastTransitionTime":"2025-12-05T10:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.136676 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tz2w5" event={"ID":"0158bdbd-94bb-4421-b698-bdcbe2e7f37b","Type":"ContainerStarted","Data":"f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f"} Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.137878 4796 generic.go:334] "Generic (PLEG): container finished" podID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerID="b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe" exitCode=0 Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.137936 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerDied","Data":"b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe"} Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.137957 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerStarted","Data":"da355471901c9263fc643649db8eb7c43f691d3b41c33622a6ef20cb4bbe86de"} Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.139641 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cqj7h" event={"ID":"7d541e60-9b92-4b9d-be51-5bd87e76deac","Type":"ContainerStarted","Data":"4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01"} Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.139667 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cqj7h" event={"ID":"7d541e60-9b92-4b9d-be51-5bd87e76deac","Type":"ContainerStarted","Data":"41f8351a70a3bff059f38a3f13adb25463ad1b6d31fc28dea121aec0228baa71"} Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.140910 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7" containerID="fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035" exitCode=0 Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.140964 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" event={"ID":"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7","Type":"ContainerDied","Data":"fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035"} Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.140980 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" event={"ID":"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7","Type":"ContainerStarted","Data":"8d39a5914146850f53bb1fc6c05aafcb9b4775c33279f12e12afeb381e9972c9"} Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.142325 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerStarted","Data":"9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7"} Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.142344 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerStarted","Data":"5c0766e0a438d3073e224d9fc409ff70228029c3a307f0b67e5be9cc82e09add"} Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.142353 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerStarted","Data":"eb6eb5b0c617de1bd2c77270d48a42fadb0739b2b241c88e5afd2c9e7d9927f8"} Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.158661 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.166293 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.180436 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.187951 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.195354 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.202943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.202978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.202986 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.202999 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.203007 4796 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:57Z","lastTransitionTime":"2025-12-05T10:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.203611 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.212425 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.226838 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.235915 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:
35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.244412 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.252161 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.262283 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.271331 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.281089 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.288735 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.295756 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.304627 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d0
1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.305196 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.305244 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.305255 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.305269 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.305278 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:57Z","lastTransitionTime":"2025-12-05T10:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.313946 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.327648 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.341978 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:
35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.350514 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.359188 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.368455 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.377134 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.385697 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.394757 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.402436 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.407619 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.407640 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.407650 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:57 crc 
kubenswrapper[4796]: I1205 10:27:57.407661 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.407669 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:57Z","lastTransitionTime":"2025-12-05T10:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.415029 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:57Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.509604 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.509851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.509860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.509872 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.509881 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:57Z","lastTransitionTime":"2025-12-05T10:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.611634 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.611697 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.611710 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.611732 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.611746 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:57Z","lastTransitionTime":"2025-12-05T10:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.713812 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.713854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.713864 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.713879 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.713891 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:57Z","lastTransitionTime":"2025-12-05T10:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.815402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.815451 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.815461 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.815476 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.815491 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:57Z","lastTransitionTime":"2025-12-05T10:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.916982 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.917020 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.917029 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.917045 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:57 crc kubenswrapper[4796]: I1205 10:27:57.917055 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:57Z","lastTransitionTime":"2025-12-05T10:27:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.019659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.019727 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.019737 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.019762 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.019772 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:58Z","lastTransitionTime":"2025-12-05T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.121742 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.121789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.121798 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.121812 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.121821 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:58Z","lastTransitionTime":"2025-12-05T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.148086 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerStarted","Data":"159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8"} Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.148122 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerStarted","Data":"2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d"} Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.148133 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerStarted","Data":"d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555"} Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.148142 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerStarted","Data":"5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589"} Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.148149 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerStarted","Data":"94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f"} Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.148157 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerStarted","Data":"3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae"} Dec 05 10:27:58 crc kubenswrapper[4796]: 
I1205 10:27:58.149740 4796 generic.go:334] "Generic (PLEG): container finished" podID="e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7" containerID="081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18" exitCode=0 Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.149810 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" event={"ID":"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7","Type":"ContainerDied","Data":"081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18"} Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.164388 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12
-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.173381 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.186852 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.197245 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.208656 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.217744 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.223809 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.223847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.223860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.223879 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.223893 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:58Z","lastTransitionTime":"2025-12-05T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.226374 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.235134 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.243757 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.257009 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.266012 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.273128 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.281444 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.290249 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.325861 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.325899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.325908 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.325923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.325931 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:58Z","lastTransitionTime":"2025-12-05T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.429216 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.429252 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.429261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.429275 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.429283 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:58Z","lastTransitionTime":"2025-12-05T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.531485 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.531520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.531529 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.531546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.531554 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:58Z","lastTransitionTime":"2025-12-05T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.609185 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-sbxd4"] Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.609536 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-sbxd4" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.611212 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.611520 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.611575 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.611874 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.622534 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.631050 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.633230 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.633261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.633270 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.633286 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.633294 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:58Z","lastTransitionTime":"2025-12-05T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.640557 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.653328 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.663175 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.671089 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.680212 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.690867 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.697499 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.705351 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.712196 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.720561 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.728938 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.735275 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.735316 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.735341 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.735350 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:58 crc 
kubenswrapper[4796]: I1205 10:27:58.735365 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.735376 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/057c024f-9d49-40a3-81b5-e6fa91b46d53-serviceca\") pod \"node-ca-sbxd4\" (UID: \"057c024f-9d49-40a3-81b5-e6fa91b46d53\") " pod="openshift-image-registry/node-ca-sbxd4" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.735375 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:58Z","lastTransitionTime":"2025-12-05T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.735398 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdjhl\" (UniqueName: \"kubernetes.io/projected/057c024f-9d49-40a3-81b5-e6fa91b46d53-kube-api-access-rdjhl\") pod \"node-ca-sbxd4\" (UID: \"057c024f-9d49-40a3-81b5-e6fa91b46d53\") " pod="openshift-image-registry/node-ca-sbxd4" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.735438 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/057c024f-9d49-40a3-81b5-e6fa91b46d53-host\") pod \"node-ca-sbxd4\" (UID: \"057c024f-9d49-40a3-81b5-e6fa91b46d53\") " pod="openshift-image-registry/node-ca-sbxd4" Dec 05 10:27:58 crc kubenswrapper[4796]: E1205 10:27:58.735504 4796 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:28:06.735477313 +0000 UTC m=+33.023582826 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.746336 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be3
0a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2
ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.754723 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:58Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.836298 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdjhl\" (UniqueName: \"kubernetes.io/projected/057c024f-9d49-40a3-81b5-e6fa91b46d53-kube-api-access-rdjhl\") pod \"node-ca-sbxd4\" (UID: \"057c024f-9d49-40a3-81b5-e6fa91b46d53\") " pod="openshift-image-registry/node-ca-sbxd4" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.836349 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.836378 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/057c024f-9d49-40a3-81b5-e6fa91b46d53-host\") 
pod \"node-ca-sbxd4\" (UID: \"057c024f-9d49-40a3-81b5-e6fa91b46d53\") " pod="openshift-image-registry/node-ca-sbxd4" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.836404 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.836451 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.836472 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.836490 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/057c024f-9d49-40a3-81b5-e6fa91b46d53-serviceca\") pod \"node-ca-sbxd4\" (UID: \"057c024f-9d49-40a3-81b5-e6fa91b46d53\") " pod="openshift-image-registry/node-ca-sbxd4" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.836544 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/057c024f-9d49-40a3-81b5-e6fa91b46d53-host\") pod 
\"node-ca-sbxd4\" (UID: \"057c024f-9d49-40a3-81b5-e6fa91b46d53\") " pod="openshift-image-registry/node-ca-sbxd4" Dec 05 10:27:58 crc kubenswrapper[4796]: E1205 10:27:58.836582 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 10:27:58 crc kubenswrapper[4796]: E1205 10:27:58.836593 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 10:27:58 crc kubenswrapper[4796]: E1205 10:27:58.836623 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 10:27:58 crc kubenswrapper[4796]: E1205 10:27:58.836605 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 10:27:58 crc kubenswrapper[4796]: E1205 10:27:58.836679 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 10:27:58 crc kubenswrapper[4796]: E1205 10:27:58.836715 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 10:27:58 crc kubenswrapper[4796]: E1205 10:27:58.836709 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:58 crc kubenswrapper[4796]: E1205 10:27:58.836728 4796 
projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:58 crc kubenswrapper[4796]: E1205 10:27:58.836672 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 10:28:06.83665633 +0000 UTC m=+33.124761843 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 10:27:58 crc kubenswrapper[4796]: E1205 10:27:58.836783 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 10:28:06.836765464 +0000 UTC m=+33.124870987 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:58 crc kubenswrapper[4796]: E1205 10:27:58.836806 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 10:28:06.836797144 +0000 UTC m=+33.124902667 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 10:27:58 crc kubenswrapper[4796]: E1205 10:27:58.836824 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 10:28:06.836815047 +0000 UTC m=+33.124920570 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.837473 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/057c024f-9d49-40a3-81b5-e6fa91b46d53-serviceca\") pod \"node-ca-sbxd4\" (UID: \"057c024f-9d49-40a3-81b5-e6fa91b46d53\") " pod="openshift-image-registry/node-ca-sbxd4" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.837695 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.837725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.837735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.837748 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.837758 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:58Z","lastTransitionTime":"2025-12-05T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.852418 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdjhl\" (UniqueName: \"kubernetes.io/projected/057c024f-9d49-40a3-81b5-e6fa91b46d53-kube-api-access-rdjhl\") pod \"node-ca-sbxd4\" (UID: \"057c024f-9d49-40a3-81b5-e6fa91b46d53\") " pod="openshift-image-registry/node-ca-sbxd4" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.920480 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-sbxd4" Dec 05 10:27:58 crc kubenswrapper[4796]: W1205 10:27:58.931920 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod057c024f_9d49_40a3_81b5_e6fa91b46d53.slice/crio-aa4e295730c949c7cfb4136277fba2d0364ab4dd8b6d88745c8f0bab4c073a1e WatchSource:0}: Error finding container aa4e295730c949c7cfb4136277fba2d0364ab4dd8b6d88745c8f0bab4c073a1e: Status 404 returned error can't find the container with id aa4e295730c949c7cfb4136277fba2d0364ab4dd8b6d88745c8f0bab4c073a1e Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.939831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.939865 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.939875 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.939889 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:58 crc kubenswrapper[4796]: I1205 10:27:58.939897 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:58Z","lastTransitionTime":"2025-12-05T10:27:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.030444 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.030505 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:27:59 crc kubenswrapper[4796]: E1205 10:27:59.031001 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.030545 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:27:59 crc kubenswrapper[4796]: E1205 10:27:59.031085 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:27:59 crc kubenswrapper[4796]: E1205 10:27:59.031167 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.041701 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.042340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.042351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.042368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.042378 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:59Z","lastTransitionTime":"2025-12-05T10:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.143778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.143814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.143823 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.143838 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.143850 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:59Z","lastTransitionTime":"2025-12-05T10:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.155941 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-sbxd4" event={"ID":"057c024f-9d49-40a3-81b5-e6fa91b46d53","Type":"ContainerStarted","Data":"1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5"} Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.155985 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-sbxd4" event={"ID":"057c024f-9d49-40a3-81b5-e6fa91b46d53","Type":"ContainerStarted","Data":"aa4e295730c949c7cfb4136277fba2d0364ab4dd8b6d88745c8f0bab4c073a1e"} Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.158253 4796 generic.go:334] "Generic (PLEG): container finished" podID="e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7" containerID="e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243" exitCode=0 Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.158299 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" event={"ID":"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7","Type":"ContainerDied","Data":"e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243"} Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.169246 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.178497 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.193799 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.201984 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.210395 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.219896 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.230173 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.238927 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.245540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.245578 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.245588 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.245606 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.245618 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:59Z","lastTransitionTime":"2025-12-05T10:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.254464 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.265259 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.274745 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.283447 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.292298 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.301824 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.309545 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.317058 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.324653 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.332545 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d0
1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.341168 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac434
0ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.347166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.347190 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.347198 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.347210 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.347218 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:59Z","lastTransitionTime":"2025-12-05T10:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.369778 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.414618 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e2
2e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.448784 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.448820 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.448831 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.448845 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.448853 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:59Z","lastTransitionTime":"2025-12-05T10:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.451329 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.494048 4796 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.531005 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.551378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.551405 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.551413 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.551437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.551447 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:59Z","lastTransitionTime":"2025-12-05T10:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.571876 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.612059 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.650162 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.653375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.653399 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.653408 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:59 crc 
kubenswrapper[4796]: I1205 10:27:59.653419 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.653437 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:59Z","lastTransitionTime":"2025-12-05T10:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.691184 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 
10:27:59.730047 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.755557 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.755601 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:59 crc 
kubenswrapper[4796]: I1205 10:27:59.755612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.755627 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.755636 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:59Z","lastTransitionTime":"2025-12-05T10:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.774853 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:27:59Z is after 2025-08-24T17:21:41Z" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.857901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.857942 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.857951 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.857968 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.857979 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:59Z","lastTransitionTime":"2025-12-05T10:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.959631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.959664 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.959673 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.959716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:27:59 crc kubenswrapper[4796]: I1205 10:27:59.959726 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:27:59Z","lastTransitionTime":"2025-12-05T10:27:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.061295 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.061331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.061341 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.061356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.061365 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:00Z","lastTransitionTime":"2025-12-05T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.162754 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.162782 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.162790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.162794 4796 generic.go:334] "Generic (PLEG): container finished" podID="e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7" containerID="8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582" exitCode=0 Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.162804 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.162852 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" event={"ID":"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7","Type":"ContainerDied","Data":"8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582"} Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.162871 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:00Z","lastTransitionTime":"2025-12-05T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.166632 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerStarted","Data":"96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85"} Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.182615 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\
\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:00Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.192567 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:00Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.203672 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:00Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.213391 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:00Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.222556 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:00Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.230673 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:00Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.239388 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:00Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.248471 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:00Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.255841 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:00Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.264802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.264842 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.264852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:00 crc 
kubenswrapper[4796]: I1205 10:28:00.264867 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.264876 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:00Z","lastTransitionTime":"2025-12-05T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.268106 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:00Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.275991 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:28:00Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.282885 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:00Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.291342 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d0
1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:00Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.334289 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac434
0ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
2-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:00Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.368080 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.368123 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.368132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.368149 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.368159 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:00Z","lastTransitionTime":"2025-12-05T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.371470 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:00Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.470583 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.470758 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.470767 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.470780 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.470789 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:00Z","lastTransitionTime":"2025-12-05T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.572397 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.572443 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.572453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.572466 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.572475 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:00Z","lastTransitionTime":"2025-12-05T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.674127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.674161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.674170 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.674185 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.674193 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:00Z","lastTransitionTime":"2025-12-05T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.775661 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.775708 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.775719 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.775734 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.775742 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:00Z","lastTransitionTime":"2025-12-05T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.877464 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.877499 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.877508 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.877521 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.877530 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:00Z","lastTransitionTime":"2025-12-05T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.980093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.980132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.980141 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.980178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:00 crc kubenswrapper[4796]: I1205 10:28:00.980203 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:00Z","lastTransitionTime":"2025-12-05T10:28:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.030133 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.030207 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:01 crc kubenswrapper[4796]: E1205 10:28:01.030255 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.030213 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:01 crc kubenswrapper[4796]: E1205 10:28:01.030360 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:01 crc kubenswrapper[4796]: E1205 10:28:01.030473 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.082547 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.082583 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.082592 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.082605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.082613 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:01Z","lastTransitionTime":"2025-12-05T10:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.172984 4796 generic.go:334] "Generic (PLEG): container finished" podID="e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7" containerID="133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb" exitCode=0 Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.173023 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" event={"ID":"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7","Type":"ContainerDied","Data":"133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb"} Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.183903 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.183934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.183943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.183955 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.183963 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:01Z","lastTransitionTime":"2025-12-05T10:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.186263 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:01Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.194974 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:01Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.208262 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:01Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.218295 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac434
0ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
2-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:01Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.225123 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:01Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.233612 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T1
0:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:01Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.240472 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:01Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.248931 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:01Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.256628 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:01Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.264362 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:01Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.276139 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:01Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.284090 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:
35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:01Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.285540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.285583 
4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.285592 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.285607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.285616 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:01Z","lastTransitionTime":"2025-12-05T10:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.291899 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:01Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.300384 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:01Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.308803 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:01Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.387922 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.387958 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.387966 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.387981 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.387990 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:01Z","lastTransitionTime":"2025-12-05T10:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.490136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.490194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.490204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.490222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.490233 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:01Z","lastTransitionTime":"2025-12-05T10:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.592596 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.592628 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.592636 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.592646 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.592654 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:01Z","lastTransitionTime":"2025-12-05T10:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.694600 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.694641 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.694651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.694665 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.694673 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:01Z","lastTransitionTime":"2025-12-05T10:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.796517 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.796560 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.796571 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.796584 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.796593 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:01Z","lastTransitionTime":"2025-12-05T10:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.898817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.898973 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.898983 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.898996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:01 crc kubenswrapper[4796]: I1205 10:28:01.899004 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:01Z","lastTransitionTime":"2025-12-05T10:28:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.001325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.001373 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.001382 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.001395 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.001403 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:02Z","lastTransitionTime":"2025-12-05T10:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.103018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.103060 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.103070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.103084 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.103108 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:02Z","lastTransitionTime":"2025-12-05T10:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.178276 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerStarted","Data":"9f72f7420b3262337a1743932202e3b892441969e119f6225e4e9f968666d89e"} Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.178552 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.180851 4796 generic.go:334] "Generic (PLEG): container finished" podID="e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7" containerID="df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15" exitCode=0 Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.180881 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" event={"ID":"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7","Type":"ContainerDied","Data":"df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15"} Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.187142 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.199846 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:28:02 crc 
kubenswrapper[4796]: I1205 10:28:02.200233 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.205227 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.205262 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.205273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.205286 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.205295 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:02Z","lastTransitionTime":"2025-12-05T10:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.209858 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a
841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.219872 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.229185 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.238986 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.247318 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.256422 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.263916 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.277218 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f72f7420b3262337a1743932202e3b892441969e119f6225e4e9f968666d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.284862 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.292783 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.301201 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.306939 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.306970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.306979 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.306992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.307007 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:02Z","lastTransitionTime":"2025-12-05T10:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.309815 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z 
is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.319260 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.327444 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.334336 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.342617 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d0
1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.351952 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaea
d203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"o
s-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabo
uts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.358096 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.371076 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10
:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.379556 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.387946 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.396076 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.404651 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.411298 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.411327 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.411338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.411470 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.411519 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:02Z","lastTransitionTime":"2025-12-05T10:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.415264 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.423175 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.430714 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.437604 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.449962 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f72f7420b3262337a1743932202e3b892441969e119f6225e4e9f968666d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:02Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.513831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.513855 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.513863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.513876 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.513885 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:02Z","lastTransitionTime":"2025-12-05T10:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.615515 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.615564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.615573 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.615588 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.615597 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:02Z","lastTransitionTime":"2025-12-05T10:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.717108 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.717129 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.717136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.717146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.717153 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:02Z","lastTransitionTime":"2025-12-05T10:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.819119 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.819152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.819164 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.819176 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.819185 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:02Z","lastTransitionTime":"2025-12-05T10:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.920583 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.920607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.920615 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.920624 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:02 crc kubenswrapper[4796]: I1205 10:28:02.920630 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:02Z","lastTransitionTime":"2025-12-05T10:28:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.022045 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.022071 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.022080 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.022089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.022097 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:03Z","lastTransitionTime":"2025-12-05T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.031010 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:03 crc kubenswrapper[4796]: E1205 10:28:03.031085 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.031154 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.031179 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:03 crc kubenswrapper[4796]: E1205 10:28:03.031265 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:03 crc kubenswrapper[4796]: E1205 10:28:03.031324 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.124624 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.124669 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.124678 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.124711 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.124720 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:03Z","lastTransitionTime":"2025-12-05T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.186011 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" event={"ID":"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7","Type":"ContainerStarted","Data":"b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518"} Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.186063 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.186454 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.195983 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.202280 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.204650 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.218212 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.226605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.226633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.226642 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.226659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.226668 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:03Z","lastTransitionTime":"2025-12-05T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.226610 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a
841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.235144 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.244026 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.253042 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.264167 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.273061 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.285167 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f72f7420b3262337a1743932202e3b892441969e119f6225e4e9f968666d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.294945 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457
408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d3
16e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{
\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T
10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.301476 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.308753 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T1
0:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.316113 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.325982 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.328308 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.328343 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.328352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.328366 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.328376 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:03Z","lastTransitionTime":"2025-12-05T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.337224 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.348752 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.362753 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.371063 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.387251 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T1
0:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.400411 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.412843 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.421270 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.430074 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.430108 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.430117 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.430130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.430138 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:03Z","lastTransitionTime":"2025-12-05T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.434922 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.445940 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.454891 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.463287 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.470642 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.483467 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f72f7420b3262337a1743932202e3b892441969e119f6225e4e9f968666d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.491759 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:5
1Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:03Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.532004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 
05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.532039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.532049 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.532062 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.532071 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:03Z","lastTransitionTime":"2025-12-05T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.634032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.634070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.634079 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.634092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.634100 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:03Z","lastTransitionTime":"2025-12-05T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.735824 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.735986 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.735996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.736009 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.736017 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:03Z","lastTransitionTime":"2025-12-05T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.837797 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.837834 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.837845 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.837861 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.837869 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:03Z","lastTransitionTime":"2025-12-05T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.940127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.940160 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.940168 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.940181 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:03 crc kubenswrapper[4796]: I1205 10:28:03.940191 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:03Z","lastTransitionTime":"2025-12-05T10:28:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.039711 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z 
is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.041633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.041656 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.041696 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.041708 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.041716 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:04Z","lastTransitionTime":"2025-12-05T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.043617 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.043655 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.043663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.043676 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.043700 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:04Z","lastTransitionTime":"2025-12-05T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.050472 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: E1205 10:28:04.051838 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.054870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.054956 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.055024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.055091 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.055152 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:04Z","lastTransitionTime":"2025-12-05T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.058187 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: E1205 10:28:04.063235 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.065903 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.065960 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.065970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.065906 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.065983 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.065993 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:04Z","lastTransitionTime":"2025-12-05T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:04 crc kubenswrapper[4796]: E1205 10:28:04.074293 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.075462 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.077172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.077206 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.077233 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.077243 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.077250 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:04Z","lastTransitionTime":"2025-12-05T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.083981 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: E1205 10:28:04.085737 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.088102 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.088149 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.088175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.088188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.088196 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:04Z","lastTransitionTime":"2025-12-05T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.096469 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: E1205 10:28:04.097759 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: E1205 10:28:04.097905 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.110050 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainer
Statuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.118096 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.127188 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.136253 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.142878 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.142898 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.142906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.142917 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.142926 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:04Z","lastTransitionTime":"2025-12-05T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.148093 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.161830 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f72f7420b3262337a1743932202e3b892441969e119f6225e4e9f968666d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.171122 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:5
1Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.178632 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.189145 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xvb5x_d158ce1c-6415-4e69-a1fe-862330b25ff3/ovnkube-controller/0.log" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.191406 4796 generic.go:334] "Generic (PLEG): container finished" podID="d158ce1c-6415-4e69-a1fe-862330b25ff3" 
containerID="9f72f7420b3262337a1743932202e3b892441969e119f6225e4e9f968666d89e" exitCode=1 Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.191473 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerDied","Data":"9f72f7420b3262337a1743932202e3b892441969e119f6225e4e9f968666d89e"} Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.192005 4796 scope.go:117] "RemoveContainer" containerID="9f72f7420b3262337a1743932202e3b892441969e119f6225e4e9f968666d89e" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.202155 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.210338 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.219166 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.228831 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19
f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
25-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.235280 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.245085 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.245115 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.245127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.245143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.245152 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:04Z","lastTransitionTime":"2025-12-05T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.248970 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.258965 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.291224 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.330975 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.347380 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.347437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.347448 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.347463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.347472 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:04Z","lastTransitionTime":"2025-12-05T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.373162 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.415028 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.449663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.449725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.449738 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.449756 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.449767 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:04Z","lastTransitionTime":"2025-12-05T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.455876 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.491355 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.538289 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.551403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.551453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.551465 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:04 crc 
kubenswrapper[4796]: I1205 10:28:04.551479 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.551488 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:04Z","lastTransitionTime":"2025-12-05T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.574226 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9f72f7420b3262337a1743932202e3b892441969e119f6225e4e9f968666d89e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f72f7420b3262337a1743932202e3b892441969e119f6225e4e9f968666d89e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"message\\\":\\\"ersions/factory.go:140\\\\nI1205 10:28:03.833020 6121 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 10:28:03.833174 6121 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 10:28:03.833189 6121 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 10:28:03.833193 6121 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 10:28:03.833202 6121 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 10:28:03.833205 6121 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 10:28:03.833234 6121 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 10:28:03.833243 6121 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 10:28:03.833249 6121 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 10:28:03.833246 6121 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 10:28:03.833255 6121 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 10:28:03.833270 6121 factory.go:656] Stopping watch factory\\\\nI1205 10:28:03.833285 6121 ovnkube.go:599] Stopped ovnkube\\\\nI1205 10:28:03.833308 6121 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1205 
10:28:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa0
1fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.653542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.653576 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.653585 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.653602 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.653612 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:04Z","lastTransitionTime":"2025-12-05T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.755378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.755416 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.755437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.755451 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.755460 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:04Z","lastTransitionTime":"2025-12-05T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.856932 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.856969 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.856977 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.856999 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.857007 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:04Z","lastTransitionTime":"2025-12-05T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.958832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.958858 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.958866 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.958876 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:04 crc kubenswrapper[4796]: I1205 10:28:04.958885 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:04Z","lastTransitionTime":"2025-12-05T10:28:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.031076 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.031125 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:05 crc kubenswrapper[4796]: E1205 10:28:05.031200 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.031230 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:05 crc kubenswrapper[4796]: E1205 10:28:05.031344 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:05 crc kubenswrapper[4796]: E1205 10:28:05.031418 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.060818 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.060841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.060850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.060865 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.060875 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:05Z","lastTransitionTime":"2025-12-05T10:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.163142 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.163171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.163179 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.163190 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.163197 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:05Z","lastTransitionTime":"2025-12-05T10:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.194452 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xvb5x_d158ce1c-6415-4e69-a1fe-862330b25ff3/ovnkube-controller/1.log" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.194842 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xvb5x_d158ce1c-6415-4e69-a1fe-862330b25ff3/ovnkube-controller/0.log" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.196773 4796 generic.go:334] "Generic (PLEG): container finished" podID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerID="2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036" exitCode=1 Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.196808 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerDied","Data":"2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036"} Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.196836 4796 scope.go:117] "RemoveContainer" containerID="9f72f7420b3262337a1743932202e3b892441969e119f6225e4e9f968666d89e" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.197252 4796 scope.go:117] "RemoveContainer" containerID="2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036" Dec 05 10:28:05 crc kubenswrapper[4796]: E1205 10:28:05.197448 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.208847 4796 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"ima
geID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.218425 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.227660 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.238290 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.248013 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.259897 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f72f7420b3262337a1743932202e3b892441969e119f6225e4e9f968666d89e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"message\\\":\\\"ersions/factory.go:140\\\\nI1205 10:28:03.833020 6121 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1205 10:28:03.833174 6121 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1205 
10:28:03.833189 6121 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1205 10:28:03.833193 6121 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1205 10:28:03.833202 6121 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1205 10:28:03.833205 6121 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1205 10:28:03.833234 6121 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1205 10:28:03.833243 6121 handler.go:208] Removed *v1.Node event handler 2\\\\nI1205 10:28:03.833249 6121 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1205 10:28:03.833246 6121 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1205 10:28:03.833255 6121 handler.go:208] Removed *v1.Node event handler 7\\\\nI1205 10:28:03.833270 6121 factory.go:656] Stopping watch factory\\\\nI1205 10:28:03.833285 6121 ovnkube.go:599] Stopped ovnkube\\\\nI1205 10:28:03.833308 6121 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1205 10:28:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"ernal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z]\\\\nI1205 10:28:04.798796 6251 services_controller.go:434] Service openshift-apiserver/api retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{api openshift-apiserver 
3b54abf8-b632-44a4-b36d-9f489b41a2d2 4787 0 2025-02-23 05:22:52 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver] map[operator.openshift.io/spec-hash:9c74227d7f96d723d980c50373a5e91f08c5893365bfd5a5040449b1b6585a23 service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.37,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,Se\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.264939 4796 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.264969 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.264977 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.264988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.264995 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:05Z","lastTransitionTime":"2025-12-05T10:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.268304 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.275483 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.283619 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.292523 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d85857985
3e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:
27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.299453 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.311766 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.319372 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:
35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.327038 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.334267 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.367115 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.367138 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.367147 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.367158 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.367166 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:05Z","lastTransitionTime":"2025-12-05T10:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.469341 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.469379 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.469387 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.469408 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.469419 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:05Z","lastTransitionTime":"2025-12-05T10:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.571597 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.571626 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.571634 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.571644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.571653 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:05Z","lastTransitionTime":"2025-12-05T10:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.674029 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.674069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.674079 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.674093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.674103 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:05Z","lastTransitionTime":"2025-12-05T10:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.776466 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.776554 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.776574 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.776601 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.776615 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:05Z","lastTransitionTime":"2025-12-05T10:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.879054 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.879133 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.879143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.879156 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.879170 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:05Z","lastTransitionTime":"2025-12-05T10:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.981351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.981408 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.981421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.981448 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:05 crc kubenswrapper[4796]: I1205 10:28:05.981459 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:05Z","lastTransitionTime":"2025-12-05T10:28:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.083188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.083236 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.083247 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.083262 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.083274 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:06Z","lastTransitionTime":"2025-12-05T10:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.185314 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.185361 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.185375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.185393 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.185414 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:06Z","lastTransitionTime":"2025-12-05T10:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.201044 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xvb5x_d158ce1c-6415-4e69-a1fe-862330b25ff3/ovnkube-controller/1.log" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.204634 4796 scope.go:117] "RemoveContainer" containerID="2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036" Dec 05 10:28:06 crc kubenswrapper[4796]: E1205 10:28:06.204880 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.215340 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:06Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.224339 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:06Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.238311 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"ernal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z]\\\\nI1205 
10:28:04.798796 6251 services_controller.go:434] Service openshift-apiserver/api retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{api openshift-apiserver 3b54abf8-b632-44a4-b36d-9f489b41a2d2 4787 0 2025-02-23 05:22:52 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver] map[operator.openshift.io/spec-hash:9c74227d7f96d723d980c50373a5e91f08c5893365bfd5a5040449b1b6585a23 service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.37,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,Se\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a
270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:06Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.248259 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:28:06Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.256308 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:06Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.264805 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d0
1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:06Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.274049 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c
363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c
78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:06Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.281047 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:06Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.287218 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.287273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.287283 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.287301 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.287312 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:06Z","lastTransitionTime":"2025-12-05T10:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.294101 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:06Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.302019 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:06Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.310675 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:06Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.319313 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:06Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.328243 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:06Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.337156 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:06Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.344890 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:06Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.389285 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.389315 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.389324 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:06 crc 
kubenswrapper[4796]: I1205 10:28:06.389338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.389347 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:06Z","lastTransitionTime":"2025-12-05T10:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.492010 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.492042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.492052 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.492067 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.492077 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:06Z","lastTransitionTime":"2025-12-05T10:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.593852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.593882 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.593891 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.593901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.593909 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:06Z","lastTransitionTime":"2025-12-05T10:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.695565 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.695592 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.695601 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.695615 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.695625 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:06Z","lastTransitionTime":"2025-12-05T10:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.797737 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.797941 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.797949 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.797962 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.797971 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:06Z","lastTransitionTime":"2025-12-05T10:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.810153 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:28:06 crc kubenswrapper[4796]: E1205 10:28:06.810264 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-05 10:28:22.810229245 +0000 UTC m=+49.098334757 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.899931 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.900035 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.900112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.900189 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.900250 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:06Z","lastTransitionTime":"2025-12-05T10:28:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.911439 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.911485 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.911509 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:06 crc kubenswrapper[4796]: I1205 10:28:06.911528 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:06 crc kubenswrapper[4796]: E1205 10:28:06.911647 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Dec 05 10:28:06 crc kubenswrapper[4796]: E1205 10:28:06.911670 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 10:28:06 crc kubenswrapper[4796]: E1205 10:28:06.911702 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:28:06 crc kubenswrapper[4796]: E1205 10:28:06.911752 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 10:28:06 crc kubenswrapper[4796]: E1205 10:28:06.911758 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 10:28:22.911740426 +0000 UTC m=+49.199845939 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:28:06 crc kubenswrapper[4796]: E1205 10:28:06.911832 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-05 10:28:22.911820025 +0000 UTC m=+49.199925538 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 10:28:06 crc kubenswrapper[4796]: E1205 10:28:06.911865 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 10:28:06 crc kubenswrapper[4796]: E1205 10:28:06.911943 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 10:28:22.911925624 +0000 UTC m=+49.200031137 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 10:28:06 crc kubenswrapper[4796]: E1205 10:28:06.912089 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 10:28:06 crc kubenswrapper[4796]: E1205 10:28:06.912164 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 10:28:06 crc kubenswrapper[4796]: E1205 10:28:06.912224 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:28:06 crc kubenswrapper[4796]: E1205 10:28:06.912322 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 10:28:22.91230627 +0000 UTC m=+49.200411783 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.002301 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.002424 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.002503 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.002564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.002628 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:07Z","lastTransitionTime":"2025-12-05T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.030980 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.030989 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.031097 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:07 crc kubenswrapper[4796]: E1205 10:28:07.031214 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:07 crc kubenswrapper[4796]: E1205 10:28:07.031324 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:07 crc kubenswrapper[4796]: E1205 10:28:07.031416 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.104425 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.104468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.104479 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.104493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.104502 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:07Z","lastTransitionTime":"2025-12-05T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.205893 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.205918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.205927 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.205936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.205943 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:07Z","lastTransitionTime":"2025-12-05T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.307325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.307351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.307358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.307367 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.307374 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:07Z","lastTransitionTime":"2025-12-05T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.357874 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms"] Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.358271 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.359586 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.359621 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.369457 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:07Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.376535 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:07Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.386569 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee12
20d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:07Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.398127 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"ernal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z]\\\\nI1205 
10:28:04.798796 6251 services_controller.go:434] Service openshift-apiserver/api retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{api openshift-apiserver 3b54abf8-b632-44a4-b36d-9f489b41a2d2 4787 0 2025-02-23 05:22:52 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver] map[operator.openshift.io/spec-hash:9c74227d7f96d723d980c50373a5e91f08c5893365bfd5a5040449b1b6585a23 service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.37,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,Se\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a
270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:07Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.405945 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:07Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.408930 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.408965 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.408974 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.408988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.408999 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:07Z","lastTransitionTime":"2025-12-05T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.412851 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-05T10:28:07Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.414053 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hsvg\" (UniqueName: \"kubernetes.io/projected/96992bd4-728c-4608-bc6a-df74b8823664-kube-api-access-5hsvg\") pod \"ovnkube-control-plane-749d76644c-2khms\" (UID: \"96992bd4-728c-4608-bc6a-df74b8823664\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.414108 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/96992bd4-728c-4608-bc6a-df74b8823664-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2khms\" (UID: \"96992bd4-728c-4608-bc6a-df74b8823664\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.414151 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/96992bd4-728c-4608-bc6a-df74b8823664-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2khms\" (UID: \"96992bd4-728c-4608-bc6a-df74b8823664\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.414174 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/96992bd4-728c-4608-bc6a-df74b8823664-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2khms\" (UID: \"96992bd4-728c-4608-bc6a-df74b8823664\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.421067 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:07Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.429710 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"contai
nerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\
\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:07Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.437098 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:07Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.444351 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:28:07Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.451036 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:07Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.458489 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:07Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.466036 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:07Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.473238 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:07Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.485378 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a
3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:07Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.493238 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:07Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.510922 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.510970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.510981 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.510997 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.511006 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:07Z","lastTransitionTime":"2025-12-05T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.515232 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/96992bd4-728c-4608-bc6a-df74b8823664-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2khms\" (UID: \"96992bd4-728c-4608-bc6a-df74b8823664\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.515272 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/96992bd4-728c-4608-bc6a-df74b8823664-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2khms\" (UID: \"96992bd4-728c-4608-bc6a-df74b8823664\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.515290 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/96992bd4-728c-4608-bc6a-df74b8823664-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2khms\" (UID: \"96992bd4-728c-4608-bc6a-df74b8823664\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.515306 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hsvg\" (UniqueName: \"kubernetes.io/projected/96992bd4-728c-4608-bc6a-df74b8823664-kube-api-access-5hsvg\") pod \"ovnkube-control-plane-749d76644c-2khms\" (UID: \"96992bd4-728c-4608-bc6a-df74b8823664\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.515856 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/96992bd4-728c-4608-bc6a-df74b8823664-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2khms\" (UID: \"96992bd4-728c-4608-bc6a-df74b8823664\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.515906 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/96992bd4-728c-4608-bc6a-df74b8823664-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2khms\" (UID: \"96992bd4-728c-4608-bc6a-df74b8823664\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.520109 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/96992bd4-728c-4608-bc6a-df74b8823664-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2khms\" (UID: \"96992bd4-728c-4608-bc6a-df74b8823664\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.527201 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hsvg\" (UniqueName: \"kubernetes.io/projected/96992bd4-728c-4608-bc6a-df74b8823664-kube-api-access-5hsvg\") pod \"ovnkube-control-plane-749d76644c-2khms\" (UID: \"96992bd4-728c-4608-bc6a-df74b8823664\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.613900 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.613932 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.613944 4796 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.613958 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.613966 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:07Z","lastTransitionTime":"2025-12-05T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.667363 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" Dec 05 10:28:07 crc kubenswrapper[4796]: W1205 10:28:07.677129 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96992bd4_728c_4608_bc6a_df74b8823664.slice/crio-575e22f772a6e140dd7d8ed5f07c36541b8cc083acab8d380eaeda16dd237756 WatchSource:0}: Error finding container 575e22f772a6e140dd7d8ed5f07c36541b8cc083acab8d380eaeda16dd237756: Status 404 returned error can't find the container with id 575e22f772a6e140dd7d8ed5f07c36541b8cc083acab8d380eaeda16dd237756 Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.716416 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.718986 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.719067 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.719107 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.719125 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:07Z","lastTransitionTime":"2025-12-05T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.821113 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.821147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.821157 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.821173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.821182 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:07Z","lastTransitionTime":"2025-12-05T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.924292 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.924332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.924342 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.924357 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:07 crc kubenswrapper[4796]: I1205 10:28:07.924371 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:07Z","lastTransitionTime":"2025-12-05T10:28:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.027082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.027135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.027144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.027162 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.027176 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:08Z","lastTransitionTime":"2025-12-05T10:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.129698 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.129747 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.129759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.129776 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.129787 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:08Z","lastTransitionTime":"2025-12-05T10:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.210046 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" event={"ID":"96992bd4-728c-4608-bc6a-df74b8823664","Type":"ContainerStarted","Data":"8128986b7933bc3672b05ee1df32a49cab9785765d3ece1169d5bc39977af6a3"} Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.210103 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" event={"ID":"96992bd4-728c-4608-bc6a-df74b8823664","Type":"ContainerStarted","Data":"de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480"} Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.210114 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" event={"ID":"96992bd4-728c-4608-bc6a-df74b8823664","Type":"ContainerStarted","Data":"575e22f772a6e140dd7d8ed5f07c36541b8cc083acab8d380eaeda16dd237756"} Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.221332 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:08Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.231227 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.231260 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.231269 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:08 crc 
kubenswrapper[4796]: I1205 10:28:08.231281 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.231292 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:08Z","lastTransitionTime":"2025-12-05T10:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.232483 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:08Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.242130 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8128986b7933bc3672b05ee1df32a49cab978
5765d3ece1169d5bc39977af6a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:08Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.274838 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:08Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.290951 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:
35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:08Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.304794 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:08Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.315643 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:08Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.324989 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:08Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.333736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.333771 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.333779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.333797 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.333807 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:08Z","lastTransitionTime":"2025-12-05T10:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.334951 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:08Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.343840 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:08Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.357439 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"ernal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z]\\\\nI1205 
10:28:04.798796 6251 services_controller.go:434] Service openshift-apiserver/api retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{api openshift-apiserver 3b54abf8-b632-44a4-b36d-9f489b41a2d2 4787 0 2025-02-23 05:22:52 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver] map[operator.openshift.io/spec-hash:9c74227d7f96d723d980c50373a5e91f08c5893365bfd5a5040449b1b6585a23 service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.37,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,Se\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a
270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:08Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.367875 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1
dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:08Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.375266 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:08Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.383096 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:28:08Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.392075 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:08Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.399284 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d0
1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:08Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.435640 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.435672 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.435696 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.435710 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.435719 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:08Z","lastTransitionTime":"2025-12-05T10:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.538105 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.538140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.538148 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.538161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.538172 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:08Z","lastTransitionTime":"2025-12-05T10:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.639479 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.639512 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.639521 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.639533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.639541 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:08Z","lastTransitionTime":"2025-12-05T10:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.741598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.741627 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.741636 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.741649 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.741659 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:08Z","lastTransitionTime":"2025-12-05T10:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.844056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.844092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.844101 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.844113 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.844123 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:08Z","lastTransitionTime":"2025-12-05T10:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.946901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.946948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.946957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.946972 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.946983 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:08Z","lastTransitionTime":"2025-12-05T10:28:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.967069 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.983993 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e2
2e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:08Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:08 crc kubenswrapper[4796]: I1205 10:28:08.992802 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:08Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.004601 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.013759 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.022716 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8128986b7933bc3672b05ee1df32a49cab978
5765d3ece1169d5bc39977af6a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.030711 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.030779 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.030869 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:09 crc kubenswrapper[4796]: E1205 10:28:09.030975 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:09 crc kubenswrapper[4796]: E1205 10:28:09.031162 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:09 crc kubenswrapper[4796]: E1205 10:28:09.031321 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.032660 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\
\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clus
ter-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.042202 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.049333 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.049367 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.049377 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.049391 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.049402 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:09Z","lastTransitionTime":"2025-12-05T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.053442 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.062468 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.070510 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.088175 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"ernal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z]\\\\nI1205 
10:28:04.798796 6251 services_controller.go:434] Service openshift-apiserver/api retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{api openshift-apiserver 3b54abf8-b632-44a4-b36d-9f489b41a2d2 4787 0 2025-02-23 05:22:52 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver] map[operator.openshift.io/spec-hash:9c74227d7f96d723d980c50373a5e91f08c5893365bfd5a5040449b1b6585a23 service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.37,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,Se\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a
270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.097512 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.105592 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.111990 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-sqdfm"] Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.112585 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:09 crc kubenswrapper[4796]: E1205 10:28:09.112654 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.116464 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"
os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.129365 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.129498 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gxz4\" (UniqueName: \"kubernetes.io/projected/dcf780ba-edff-45ee-88e9-5b99e4d0e458-kube-api-access-6gxz4\") pod \"network-metrics-daemon-sqdfm\" (UID: \"dcf780ba-edff-45ee-88e9-5b99e4d0e458\") " pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.129596 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs\") pod \"network-metrics-daemon-sqdfm\" (UID: \"dcf780ba-edff-45ee-88e9-5b99e4d0e458\") " pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.137360 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.151671 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.151736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.151751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.151778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.151793 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:09Z","lastTransitionTime":"2025-12-05T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.152593 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"ernal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z]\\\\nI1205 
10:28:04.798796 6251 services_controller.go:434] Service openshift-apiserver/api retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{api openshift-apiserver 3b54abf8-b632-44a4-b36d-9f489b41a2d2 4787 0 2025-02-23 05:22:52 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver] map[operator.openshift.io/spec-hash:9c74227d7f96d723d980c50373a5e91f08c5893365bfd5a5040449b1b6585a23 service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.37,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,Se\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a
270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.162624 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.172756 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.182298 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.193263 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d85857985
3e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:
27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.201069 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.210224 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.217465 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.226140 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.230257 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs\") pod \"network-metrics-daemon-sqdfm\" (UID: \"dcf780ba-edff-45ee-88e9-5b99e4d0e458\") " pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.230390 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gxz4\" 
(UniqueName: \"kubernetes.io/projected/dcf780ba-edff-45ee-88e9-5b99e4d0e458-kube-api-access-6gxz4\") pod \"network-metrics-daemon-sqdfm\" (UID: \"dcf780ba-edff-45ee-88e9-5b99e4d0e458\") " pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:09 crc kubenswrapper[4796]: E1205 10:28:09.230488 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 10:28:09 crc kubenswrapper[4796]: E1205 10:28:09.230572 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs podName:dcf780ba-edff-45ee-88e9-5b99e4d0e458 nodeName:}" failed. No retries permitted until 2025-12-05 10:28:09.73054708 +0000 UTC m=+36.018652593 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs") pod "network-metrics-daemon-sqdfm" (UID: "dcf780ba-edff-45ee-88e9-5b99e4d0e458") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.235210 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.243489 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8128986b7933bc3672b05ee1df32a4
9cab9785765d3ece1169d5bc39977af6a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.244108 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gxz4\" (UniqueName: \"kubernetes.io/projected/dcf780ba-edff-45ee-88e9-5b99e4d0e458-kube-api-access-6gxz4\") pod \"network-metrics-daemon-sqdfm\" (UID: \"dcf780ba-edff-45ee-88e9-5b99e4d0e458\") " pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:09 crc kubenswrapper[4796]: 
I1205 10:28:09.254232 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.254263 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.254274 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.254290 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.254301 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:09Z","lastTransitionTime":"2025-12-05T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.257912 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.268570 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.278260 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.287337 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.294891 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf780ba-edff-45ee-88e9-5b99e4d0e458\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sqdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.306148 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:09Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.356166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.356200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.356210 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.356222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 
05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.356231 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:09Z","lastTransitionTime":"2025-12-05T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.457891 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.458041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.458050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.458063 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.458072 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:09Z","lastTransitionTime":"2025-12-05T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.561170 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.561216 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.561229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.561247 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.561258 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:09Z","lastTransitionTime":"2025-12-05T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.663484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.663520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.663529 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.663542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.663552 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:09Z","lastTransitionTime":"2025-12-05T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.736407 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs\") pod \"network-metrics-daemon-sqdfm\" (UID: \"dcf780ba-edff-45ee-88e9-5b99e4d0e458\") " pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:09 crc kubenswrapper[4796]: E1205 10:28:09.736570 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 10:28:09 crc kubenswrapper[4796]: E1205 10:28:09.736648 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs podName:dcf780ba-edff-45ee-88e9-5b99e4d0e458 nodeName:}" failed. No retries permitted until 2025-12-05 10:28:10.736631388 +0000 UTC m=+37.024736911 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs") pod "network-metrics-daemon-sqdfm" (UID: "dcf780ba-edff-45ee-88e9-5b99e4d0e458") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.765172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.765193 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.765203 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.765214 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.765221 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:09Z","lastTransitionTime":"2025-12-05T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.867609 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.867643 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.867651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.867664 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.867673 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:09Z","lastTransitionTime":"2025-12-05T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.969460 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.969493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.969508 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.969521 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:09 crc kubenswrapper[4796]: I1205 10:28:09.969530 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:09Z","lastTransitionTime":"2025-12-05T10:28:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.071329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.071360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.071369 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.071381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.071390 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:10Z","lastTransitionTime":"2025-12-05T10:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.173494 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.173557 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.173571 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.173593 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.173606 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:10Z","lastTransitionTime":"2025-12-05T10:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.280385 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.280428 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.280449 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.280465 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.280476 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:10Z","lastTransitionTime":"2025-12-05T10:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.383171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.383218 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.383229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.383246 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.383260 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:10Z","lastTransitionTime":"2025-12-05T10:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.485901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.485946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.485957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.485974 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.485985 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:10Z","lastTransitionTime":"2025-12-05T10:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.588374 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.588408 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.588417 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.588429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.588452 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:10Z","lastTransitionTime":"2025-12-05T10:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.690344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.690385 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.690395 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.690410 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.690423 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:10Z","lastTransitionTime":"2025-12-05T10:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.745228 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs\") pod \"network-metrics-daemon-sqdfm\" (UID: \"dcf780ba-edff-45ee-88e9-5b99e4d0e458\") " pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:10 crc kubenswrapper[4796]: E1205 10:28:10.745373 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 10:28:10 crc kubenswrapper[4796]: E1205 10:28:10.745445 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs podName:dcf780ba-edff-45ee-88e9-5b99e4d0e458 nodeName:}" failed. No retries permitted until 2025-12-05 10:28:12.745419494 +0000 UTC m=+39.033525007 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs") pod "network-metrics-daemon-sqdfm" (UID: "dcf780ba-edff-45ee-88e9-5b99e4d0e458") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.792309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.792345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.792356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.792371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.792381 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:10Z","lastTransitionTime":"2025-12-05T10:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.894638 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.894695 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.894706 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.894724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.894737 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:10Z","lastTransitionTime":"2025-12-05T10:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.996956 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.996991 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.997001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.997014 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:10 crc kubenswrapper[4796]: I1205 10:28:10.997022 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:10Z","lastTransitionTime":"2025-12-05T10:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.030652 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.030665 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.030743 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.030722 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:11 crc kubenswrapper[4796]: E1205 10:28:11.030789 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:11 crc kubenswrapper[4796]: E1205 10:28:11.030860 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:11 crc kubenswrapper[4796]: E1205 10:28:11.030930 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:11 crc kubenswrapper[4796]: E1205 10:28:11.031088 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.099543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.099584 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.099596 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.099609 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.099619 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:11Z","lastTransitionTime":"2025-12-05T10:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.202049 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.202097 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.202107 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.202122 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.202141 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:11Z","lastTransitionTime":"2025-12-05T10:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.304366 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.304407 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.304417 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.304443 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.304458 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:11Z","lastTransitionTime":"2025-12-05T10:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.406200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.406244 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.406256 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.406274 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.406293 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:11Z","lastTransitionTime":"2025-12-05T10:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.509106 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.509172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.509183 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.509207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.509218 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:11Z","lastTransitionTime":"2025-12-05T10:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.610663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.610720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.610730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.610744 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.610756 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:11Z","lastTransitionTime":"2025-12-05T10:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.713199 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.713247 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.713258 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.713270 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.713278 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:11Z","lastTransitionTime":"2025-12-05T10:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.814810 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.814852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.814862 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.814875 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.814889 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:11Z","lastTransitionTime":"2025-12-05T10:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.917014 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.917044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.917052 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.917064 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:11 crc kubenswrapper[4796]: I1205 10:28:11.917071 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:11Z","lastTransitionTime":"2025-12-05T10:28:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.019059 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.019101 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.019119 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.019133 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.019144 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:12Z","lastTransitionTime":"2025-12-05T10:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.120705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.120753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.120767 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.120791 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.120805 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:12Z","lastTransitionTime":"2025-12-05T10:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.222299 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.222352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.222362 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.222381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.222396 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:12Z","lastTransitionTime":"2025-12-05T10:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.324657 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.324705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.324715 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.324726 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.324735 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:12Z","lastTransitionTime":"2025-12-05T10:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.426764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.426950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.427023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.427094 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.427148 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:12Z","lastTransitionTime":"2025-12-05T10:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.529066 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.529229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.529307 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.529389 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.529485 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:12Z","lastTransitionTime":"2025-12-05T10:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.630790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.630922 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.630988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.631058 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.631124 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:12Z","lastTransitionTime":"2025-12-05T10:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.733484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.733542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.733552 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.733568 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.733580 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:12Z","lastTransitionTime":"2025-12-05T10:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.763052 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs\") pod \"network-metrics-daemon-sqdfm\" (UID: \"dcf780ba-edff-45ee-88e9-5b99e4d0e458\") " pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:12 crc kubenswrapper[4796]: E1205 10:28:12.763239 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 10:28:12 crc kubenswrapper[4796]: E1205 10:28:12.763318 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs podName:dcf780ba-edff-45ee-88e9-5b99e4d0e458 nodeName:}" failed. No retries permitted until 2025-12-05 10:28:16.763298968 +0000 UTC m=+43.051404481 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs") pod "network-metrics-daemon-sqdfm" (UID: "dcf780ba-edff-45ee-88e9-5b99e4d0e458") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.836257 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.836293 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.836303 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.836317 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.836328 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:12Z","lastTransitionTime":"2025-12-05T10:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.938200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.938226 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.938235 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.938259 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:12 crc kubenswrapper[4796]: I1205 10:28:12.938267 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:12Z","lastTransitionTime":"2025-12-05T10:28:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.030749 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.030788 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.030751 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.031007 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:13 crc kubenswrapper[4796]: E1205 10:28:13.031205 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:13 crc kubenswrapper[4796]: E1205 10:28:13.031417 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:13 crc kubenswrapper[4796]: E1205 10:28:13.031644 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:13 crc kubenswrapper[4796]: E1205 10:28:13.031787 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.040630 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.040759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.040837 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.040904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.040967 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:13Z","lastTransitionTime":"2025-12-05T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.143431 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.143470 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.143479 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.143491 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.143502 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:13Z","lastTransitionTime":"2025-12-05T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.246008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.246035 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.246044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.246056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.246064 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:13Z","lastTransitionTime":"2025-12-05T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.349233 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.349281 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.349294 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.349319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.349332 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:13Z","lastTransitionTime":"2025-12-05T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.451753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.451804 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.451817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.451837 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.451851 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:13Z","lastTransitionTime":"2025-12-05T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.554533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.554578 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.554590 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.554606 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.554619 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:13Z","lastTransitionTime":"2025-12-05T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.656809 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.656869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.656881 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.656904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.656918 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:13Z","lastTransitionTime":"2025-12-05T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.758977 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.759035 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.759047 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.759063 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.759078 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:13Z","lastTransitionTime":"2025-12-05T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.861134 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.861174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.861185 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.861201 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.861211 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:13Z","lastTransitionTime":"2025-12-05T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.964366 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.964411 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.964426 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.964452 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:13 crc kubenswrapper[4796]: I1205 10:28:13.964463 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:13Z","lastTransitionTime":"2025-12-05T10:28:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.043220 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.052090 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8128986b7933bc3672b05ee1df32a49cab978
5765d3ece1169d5bc39977af6a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.067320 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.067347 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.067357 4796 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.067375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.067385 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:14Z","lastTransitionTime":"2025-12-05T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.068112 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25e
cb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.079169 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.089070 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.097974 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf780ba-edff-45ee-88e9-5b99e4d0e458\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sqdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc 
kubenswrapper[4796]: I1205 10:28:14.108375 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e
2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 
10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.117155 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.126384 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.141263 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.149837 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.163810 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"ernal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z]\\\\nI1205 
10:28:04.798796 6251 services_controller.go:434] Service openshift-apiserver/api retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{api openshift-apiserver 3b54abf8-b632-44a4-b36d-9f489b41a2d2 4787 0 2025-02-23 05:22:52 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver] map[operator.openshift.io/spec-hash:9c74227d7f96d723d980c50373a5e91f08c5893365bfd5a5040449b1b6585a23 service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.37,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,Se\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a
270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.169094 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.169121 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.169131 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.169149 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.169162 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:14Z","lastTransitionTime":"2025-12-05T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.173425 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.181533 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.188599 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.197092 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d0
1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.207946 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c
363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c
78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.271411 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.271435 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.271454 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.271466 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.271475 4796 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:14Z","lastTransitionTime":"2025-12-05T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.293957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.294000 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.294013 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.294030 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.294041 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:14Z","lastTransitionTime":"2025-12-05T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:14 crc kubenswrapper[4796]: E1205 10:28:14.302025 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.304901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.304931 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.304940 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.304950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.304958 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:14Z","lastTransitionTime":"2025-12-05T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.316268 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.316296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.316305 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.316316 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.316341 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:14Z","lastTransitionTime":"2025-12-05T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:14 crc kubenswrapper[4796]: E1205 10:28:14.324046 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.326393 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.326420 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.326455 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.326468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.326476 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:14Z","lastTransitionTime":"2025-12-05T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:14 crc kubenswrapper[4796]: E1205 10:28:14.334627 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.336807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.336843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.336853 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.336870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.336881 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:14Z","lastTransitionTime":"2025-12-05T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:14 crc kubenswrapper[4796]: E1205 10:28:14.346333 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:14 crc kubenswrapper[4796]: E1205 10:28:14.346449 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.374233 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.374289 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.374318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.374334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.374342 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:14Z","lastTransitionTime":"2025-12-05T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.476094 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.476147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.476177 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.476194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.476202 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:14Z","lastTransitionTime":"2025-12-05T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.577566 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.577595 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.577609 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.577644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.577655 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:14Z","lastTransitionTime":"2025-12-05T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.679261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.679326 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.679337 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.679349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.679357 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:14Z","lastTransitionTime":"2025-12-05T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.781636 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.781716 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.781735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.781797 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.781812 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:14Z","lastTransitionTime":"2025-12-05T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.884251 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.884299 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.884309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.884329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.884343 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:14Z","lastTransitionTime":"2025-12-05T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.986111 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.986145 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.986155 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.986171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:14 crc kubenswrapper[4796]: I1205 10:28:14.986181 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:14Z","lastTransitionTime":"2025-12-05T10:28:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.030516 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.030533 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.030536 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.030565 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:15 crc kubenswrapper[4796]: E1205 10:28:15.030633 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:15 crc kubenswrapper[4796]: E1205 10:28:15.030728 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:15 crc kubenswrapper[4796]: E1205 10:28:15.030780 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:15 crc kubenswrapper[4796]: E1205 10:28:15.030940 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.087507 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.087538 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.087546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.087557 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.087566 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:15Z","lastTransitionTime":"2025-12-05T10:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.189787 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.189837 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.189850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.189866 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.189878 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:15Z","lastTransitionTime":"2025-12-05T10:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.291854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.291916 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.291949 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.291966 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.291978 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:15Z","lastTransitionTime":"2025-12-05T10:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.394049 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.394100 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.394111 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.394127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.394139 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:15Z","lastTransitionTime":"2025-12-05T10:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.497269 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.497321 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.497332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.497354 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.497368 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:15Z","lastTransitionTime":"2025-12-05T10:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.600560 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.600622 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.600633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.600659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.600675 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:15Z","lastTransitionTime":"2025-12-05T10:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.703479 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.703518 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.703527 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.703545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.703554 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:15Z","lastTransitionTime":"2025-12-05T10:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.805632 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.805741 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.805751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.805768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.805781 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:15Z","lastTransitionTime":"2025-12-05T10:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.907925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.907956 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.907965 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.907980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:15 crc kubenswrapper[4796]: I1205 10:28:15.907992 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:15Z","lastTransitionTime":"2025-12-05T10:28:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.009582 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.009630 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.009642 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.009658 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.009667 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:16Z","lastTransitionTime":"2025-12-05T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.111239 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.111276 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.111285 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.111296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.111309 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:16Z","lastTransitionTime":"2025-12-05T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.214070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.214125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.214134 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.214157 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.214168 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:16Z","lastTransitionTime":"2025-12-05T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.316977 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.317022 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.317033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.317055 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.317067 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:16Z","lastTransitionTime":"2025-12-05T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.419077 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.419503 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.419581 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.419652 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.419766 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:16Z","lastTransitionTime":"2025-12-05T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.521815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.521859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.521869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.521883 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.521894 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:16Z","lastTransitionTime":"2025-12-05T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.623888 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.623931 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.623940 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.623954 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.623965 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:16Z","lastTransitionTime":"2025-12-05T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.726596 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.726661 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.726678 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.726746 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.726772 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:16Z","lastTransitionTime":"2025-12-05T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.800205 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs\") pod \"network-metrics-daemon-sqdfm\" (UID: \"dcf780ba-edff-45ee-88e9-5b99e4d0e458\") " pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:16 crc kubenswrapper[4796]: E1205 10:28:16.800364 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 10:28:16 crc kubenswrapper[4796]: E1205 10:28:16.800424 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs podName:dcf780ba-edff-45ee-88e9-5b99e4d0e458 nodeName:}" failed. No retries permitted until 2025-12-05 10:28:24.800408948 +0000 UTC m=+51.088514461 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs") pod "network-metrics-daemon-sqdfm" (UID: "dcf780ba-edff-45ee-88e9-5b99e4d0e458") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.828487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.828591 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.828665 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.828773 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.828878 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:16Z","lastTransitionTime":"2025-12-05T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.931239 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.931267 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.931275 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.931287 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:16 crc kubenswrapper[4796]: I1205 10:28:16.931294 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:16Z","lastTransitionTime":"2025-12-05T10:28:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.031107 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:17 crc kubenswrapper[4796]: E1205 10:28:17.031249 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.031785 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.031797 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.031875 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:17 crc kubenswrapper[4796]: E1205 10:28:17.031986 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:17 crc kubenswrapper[4796]: E1205 10:28:17.031897 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:17 crc kubenswrapper[4796]: E1205 10:28:17.032113 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.033148 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.033180 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.033189 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.033200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.033208 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:17Z","lastTransitionTime":"2025-12-05T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.135573 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.135617 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.135633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.135648 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.135659 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:17Z","lastTransitionTime":"2025-12-05T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.237366 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.237423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.237435 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.237471 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.237486 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:17Z","lastTransitionTime":"2025-12-05T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.339605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.339656 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.339665 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.339704 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.339717 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:17Z","lastTransitionTime":"2025-12-05T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.441246 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.442174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.442258 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.442329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.442389 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:17Z","lastTransitionTime":"2025-12-05T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.544728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.544761 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.544769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.544780 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.544787 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:17Z","lastTransitionTime":"2025-12-05T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.647251 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.647296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.647306 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.647323 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.647333 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:17Z","lastTransitionTime":"2025-12-05T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.749182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.749332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.749420 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.749529 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.749606 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:17Z","lastTransitionTime":"2025-12-05T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.851519 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.851737 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.851858 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.851948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.852035 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:17Z","lastTransitionTime":"2025-12-05T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.954018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.954068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.954081 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.954097 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:17 crc kubenswrapper[4796]: I1205 10:28:17.954107 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:17Z","lastTransitionTime":"2025-12-05T10:28:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.030879 4796 scope.go:117] "RemoveContainer" containerID="2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.055985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.056106 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.056167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.056238 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.056310 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:18Z","lastTransitionTime":"2025-12-05T10:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.158650 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.158675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.158702 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.158720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.158734 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:18Z","lastTransitionTime":"2025-12-05T10:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.240002 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xvb5x_d158ce1c-6415-4e69-a1fe-862330b25ff3/ovnkube-controller/1.log" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.242942 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerStarted","Data":"a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a"} Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.243097 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.253108 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276
703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.261028 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.261067 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.261077 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.261094 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.261105 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:18Z","lastTransitionTime":"2025-12-05T10:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.262418 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.276357 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.291463 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d85857985
3e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:
27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.301993 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.315702 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8128986b7933bc3672b05ee1df32a49cab978
5765d3ece1169d5bc39977af6a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.340012 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.352677 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:
35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.363775 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.363814 
4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.363824 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.363839 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.363850 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:18Z","lastTransitionTime":"2025-12-05T10:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.365202 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.374736 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.384858 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c
97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.395085 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.403619 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.411358 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf780ba-edff-45ee-88e9-5b99e4d0e458\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sqdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.419460 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.430747 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.443481 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"ernal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z]\\\\nI1205 
10:28:04.798796 6251 services_controller.go:434] Service openshift-apiserver/api retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{api openshift-apiserver 3b54abf8-b632-44a4-b36d-9f489b41a2d2 4787 0 2025-02-23 05:22:52 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver] map[operator.openshift.io/spec-hash:9c74227d7f96d723d980c50373a5e91f08c5893365bfd5a5040449b1b6585a23 service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.37,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,Se\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/n
etns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\
\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.466472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.466505 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.466518 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.466531 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.466540 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:18Z","lastTransitionTime":"2025-12-05T10:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.570184 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.570538 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.570549 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.570562 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.570570 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:18Z","lastTransitionTime":"2025-12-05T10:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.672313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.672341 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.672351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.672363 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.672372 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:18Z","lastTransitionTime":"2025-12-05T10:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.775363 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.775421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.775439 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.775473 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.775487 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:18Z","lastTransitionTime":"2025-12-05T10:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.878098 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.878153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.878164 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.878182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.878193 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:18Z","lastTransitionTime":"2025-12-05T10:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.980521 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.980574 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.980584 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.980602 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:18 crc kubenswrapper[4796]: I1205 10:28:18.980621 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:18Z","lastTransitionTime":"2025-12-05T10:28:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.030408 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.030540 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:19 crc kubenswrapper[4796]: E1205 10:28:19.030579 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.030430 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:19 crc kubenswrapper[4796]: E1205 10:28:19.030740 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:19 crc kubenswrapper[4796]: E1205 10:28:19.030799 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.030850 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:19 crc kubenswrapper[4796]: E1205 10:28:19.030905 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.083100 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.083154 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.083170 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.083203 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.083232 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:19Z","lastTransitionTime":"2025-12-05T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.185175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.185214 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.185222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.185236 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.185246 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:19Z","lastTransitionTime":"2025-12-05T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.247388 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xvb5x_d158ce1c-6415-4e69-a1fe-862330b25ff3/ovnkube-controller/2.log" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.248012 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xvb5x_d158ce1c-6415-4e69-a1fe-862330b25ff3/ovnkube-controller/1.log" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.251471 4796 generic.go:334] "Generic (PLEG): container finished" podID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerID="a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a" exitCode=1 Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.251510 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerDied","Data":"a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a"} Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.251548 4796 scope.go:117] "RemoveContainer" containerID="2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.252245 4796 scope.go:117] "RemoveContainer" containerID="a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a" Dec 05 10:28:19 crc kubenswrapper[4796]: E1205 10:28:19.252389 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.263771 4796 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:19Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.272552 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:19Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.284784 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"ernal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z]\\\\nI1205 
10:28:04.798796 6251 services_controller.go:434] Service openshift-apiserver/api retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{api openshift-apiserver 3b54abf8-b632-44a4-b36d-9f489b41a2d2 4787 0 2025-02-23 05:22:52 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver] map[operator.openshift.io/spec-hash:9c74227d7f96d723d980c50373a5e91f08c5893365bfd5a5040449b1b6585a23 service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.37,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,Se\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:18Z\\\",\\\"message\\\":\\\"tart default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z]\\\\nI1205 10:28:18.699418 6479 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\
\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:19Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 
10:28:19.287068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.287132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.287151 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.287175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.287197 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:19Z","lastTransitionTime":"2025-12-05T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.293270 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:19Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.300350 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:19Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.309124 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:19Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.319111 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d85857985
3e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:
27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:19Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.327112 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:19Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.339990 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:19Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.348614 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:
35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:19Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.357299 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:19Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.365762 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:19Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.372945 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8128986b7933bc3672b05ee1df32a49cab978
5765d3ece1169d5bc39977af6a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:19Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.382157 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c
97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:19Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.389406 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.389463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.389473 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.389487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.389499 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:19Z","lastTransitionTime":"2025-12-05T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.391099 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:19Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.399800 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:19Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.406618 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf780ba-edff-45ee-88e9-5b99e4d0e458\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sqdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:19Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.491313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.491359 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.491371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.491392 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.491404 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:19Z","lastTransitionTime":"2025-12-05T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.593831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.593888 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.593903 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.593933 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.593956 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:19Z","lastTransitionTime":"2025-12-05T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.696264 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.696305 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.696315 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.696331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.696343 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:19Z","lastTransitionTime":"2025-12-05T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.798607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.798644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.798653 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.798666 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.798675 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:19Z","lastTransitionTime":"2025-12-05T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.901484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.901520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.901531 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.901546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:19 crc kubenswrapper[4796]: I1205 10:28:19.901556 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:19Z","lastTransitionTime":"2025-12-05T10:28:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.004115 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.004155 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.004165 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.004181 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.004191 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:20Z","lastTransitionTime":"2025-12-05T10:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.106169 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.106198 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.106206 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.106217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.106225 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:20Z","lastTransitionTime":"2025-12-05T10:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.208235 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.208272 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.208282 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.208293 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.208303 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:20Z","lastTransitionTime":"2025-12-05T10:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.256119 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xvb5x_d158ce1c-6415-4e69-a1fe-862330b25ff3/ovnkube-controller/2.log" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.310119 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.310156 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.310168 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.310185 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.310196 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:20Z","lastTransitionTime":"2025-12-05T10:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.411881 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.411914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.411923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.411934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.411942 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:20Z","lastTransitionTime":"2025-12-05T10:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.513914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.513953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.513961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.513974 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.513983 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:20Z","lastTransitionTime":"2025-12-05T10:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.616360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.616397 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.616456 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.616469 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.616477 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:20Z","lastTransitionTime":"2025-12-05T10:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.718738 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.718794 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.718807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.718827 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.718838 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:20Z","lastTransitionTime":"2025-12-05T10:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.821980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.822014 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.822044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.822057 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.822067 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:20Z","lastTransitionTime":"2025-12-05T10:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.924520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.924582 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.924592 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.924609 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:20 crc kubenswrapper[4796]: I1205 10:28:20.924640 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:20Z","lastTransitionTime":"2025-12-05T10:28:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.026725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.026761 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.026773 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.026790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.026801 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:21Z","lastTransitionTime":"2025-12-05T10:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.030073 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.030135 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.030143 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.030199 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:21 crc kubenswrapper[4796]: E1205 10:28:21.030232 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:21 crc kubenswrapper[4796]: E1205 10:28:21.030303 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:21 crc kubenswrapper[4796]: E1205 10:28:21.030365 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:21 crc kubenswrapper[4796]: E1205 10:28:21.030438 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.128975 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.129016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.129024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.129039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.129049 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:21Z","lastTransitionTime":"2025-12-05T10:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.231807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.231848 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.231859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.231875 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.231886 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:21Z","lastTransitionTime":"2025-12-05T10:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.333948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.334006 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.334017 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.334044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.334059 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:21Z","lastTransitionTime":"2025-12-05T10:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.435808 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.435837 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.435845 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.435866 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.435875 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:21Z","lastTransitionTime":"2025-12-05T10:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.537554 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.537596 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.537606 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.537622 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.537633 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:21Z","lastTransitionTime":"2025-12-05T10:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.639526 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.639568 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.639576 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.639589 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.639598 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:21Z","lastTransitionTime":"2025-12-05T10:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.741412 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.741463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.741472 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.741486 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.741494 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:21Z","lastTransitionTime":"2025-12-05T10:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.843536 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.843564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.843572 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.843584 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.843592 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:21Z","lastTransitionTime":"2025-12-05T10:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.945975 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.946013 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.946021 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.946228 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:21 crc kubenswrapper[4796]: I1205 10:28:21.946237 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:21Z","lastTransitionTime":"2025-12-05T10:28:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.047876 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.047925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.047938 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.047949 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.047957 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:22Z","lastTransitionTime":"2025-12-05T10:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.149348 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.149380 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.149387 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.149401 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.149410 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:22Z","lastTransitionTime":"2025-12-05T10:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.252112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.252155 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.252164 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.252174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.252182 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:22Z","lastTransitionTime":"2025-12-05T10:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.354321 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.354352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.354360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.354369 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.354376 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:22Z","lastTransitionTime":"2025-12-05T10:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.456025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.456076 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.456085 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.456099 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.456108 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:22Z","lastTransitionTime":"2025-12-05T10:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.557902 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.557933 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.557942 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.557952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.557960 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:22Z","lastTransitionTime":"2025-12-05T10:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.659299 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.659325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.659333 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.659341 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.659348 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:22Z","lastTransitionTime":"2025-12-05T10:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.760878 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.760906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.760914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.760922 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.760930 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:22Z","lastTransitionTime":"2025-12-05T10:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.852307 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:28:22 crc kubenswrapper[4796]: E1205 10:28:22.852532 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-05 10:28:54.852510727 +0000 UTC m=+81.140616240 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.862379 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.862405 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.862414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.862426 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.862433 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:22Z","lastTransitionTime":"2025-12-05T10:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.953538 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.953595 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.953636 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.953671 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:22 crc kubenswrapper[4796]: E1205 10:28:22.953767 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 10:28:22 crc kubenswrapper[4796]: E1205 10:28:22.953793 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 10:28:22 crc kubenswrapper[4796]: E1205 10:28:22.953796 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 10:28:22 crc kubenswrapper[4796]: E1205 10:28:22.953864 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 10:28:22 crc kubenswrapper[4796]: E1205 10:28:22.953892 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 10:28:22 crc kubenswrapper[4796]: E1205 10:28:22.953826 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 10:28:54.95381525 +0000 UTC m=+81.241920763 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 10:28:22 crc kubenswrapper[4796]: E1205 10:28:22.953913 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:28:22 crc kubenswrapper[4796]: E1205 10:28:22.953935 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 10:28:54.953915639 +0000 UTC m=+81.242021172 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 10:28:22 crc kubenswrapper[4796]: E1205 10:28:22.953874 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 10:28:22 crc kubenswrapper[4796]: E1205 10:28:22.953984 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:28:22 crc kubenswrapper[4796]: E1205 10:28:22.953959 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 10:28:54.953946337 +0000 UTC m=+81.242051870 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:28:22 crc kubenswrapper[4796]: E1205 10:28:22.954061 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 10:28:54.954052065 +0000 UTC m=+81.242157578 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.963985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.964007 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.964015 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.964024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:22 crc kubenswrapper[4796]: I1205 10:28:22.964031 4796 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:22Z","lastTransitionTime":"2025-12-05T10:28:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.030921 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.030942 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:23 crc kubenswrapper[4796]: E1205 10:28:23.031002 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.031122 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:23 crc kubenswrapper[4796]: E1205 10:28:23.031171 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.031356 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:23 crc kubenswrapper[4796]: E1205 10:28:23.031405 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:23 crc kubenswrapper[4796]: E1205 10:28:23.031444 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.065993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.066013 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.066021 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.066029 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.066055 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:23Z","lastTransitionTime":"2025-12-05T10:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.168078 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.168209 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.168271 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.168337 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.168395 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:23Z","lastTransitionTime":"2025-12-05T10:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.270313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.270340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.270348 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.270359 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.270366 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:23Z","lastTransitionTime":"2025-12-05T10:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.372215 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.372263 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.372271 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.372279 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.372287 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:23Z","lastTransitionTime":"2025-12-05T10:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.473480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.473515 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.473527 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.473540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.473549 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:23Z","lastTransitionTime":"2025-12-05T10:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.574916 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.574943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.574954 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.574963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.574970 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:23Z","lastTransitionTime":"2025-12-05T10:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.675930 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.675985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.675994 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.676005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.676012 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:23Z","lastTransitionTime":"2025-12-05T10:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.777759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.777847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.777920 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.777985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.778047 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:23Z","lastTransitionTime":"2025-12-05T10:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.879660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.879728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.879739 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.879754 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.879764 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:23Z","lastTransitionTime":"2025-12-05T10:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.981832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.981854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.981863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.981872 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:23 crc kubenswrapper[4796]: I1205 10:28:23.981895 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:23Z","lastTransitionTime":"2025-12-05T10:28:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.041543 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.054274 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2f023f074c6ce58f72232ca10295c5b019a1d7cc0efcf53542c68fff57eb7036\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:04Z\\\",\\\"message\\\":\\\"ernal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:04Z is after 2025-08-24T17:21:41Z]\\\\nI1205 
10:28:04.798796 6251 services_controller.go:434] Service openshift-apiserver/api retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{api openshift-apiserver 3b54abf8-b632-44a4-b36d-9f489b41a2d2 4787 0 2025-02-23 05:22:52 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver] map[operator.openshift.io/spec-hash:9c74227d7f96d723d980c50373a5e91f08c5893365bfd5a5040449b1b6585a23 service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{apiserver: true,},ClusterIP:10.217.5.37,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,Se\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:18Z\\\",\\\"message\\\":\\\"tart default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z]\\\\nI1205 10:28:18.699418 6479 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\
\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 
10:28:24.066082 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.083474 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.083514 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.083523 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.083535 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.083545 4796 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:24Z","lastTransitionTime":"2025-12-05T10:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.088871 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.106855 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.120644 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d85857985
3e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:
27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.128134 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.137163 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.144915 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.153272 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.161328 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.168508 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8128986b7933bc3672b05ee1df32a49cab978
5765d3ece1169d5bc39977af6a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.181295 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.185873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.185902 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.185911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.185924 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.185936 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:24Z","lastTransitionTime":"2025-12-05T10:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.192125 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.203215 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.212123 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.220570 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf780ba-edff-45ee-88e9-5b99e4d0e458\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sqdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.288340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.288396 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.288409 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.288432 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.288469 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:24Z","lastTransitionTime":"2025-12-05T10:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.390122 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.390150 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.390158 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.390178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.390188 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:24Z","lastTransitionTime":"2025-12-05T10:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.492551 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.492589 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.492598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.492612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.492623 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:24Z","lastTransitionTime":"2025-12-05T10:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.566782 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.566815 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.566823 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.566833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.566840 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:24Z","lastTransitionTime":"2025-12-05T10:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:24 crc kubenswrapper[4796]: E1205 10:28:24.577012 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.579349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.579381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.579408 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.579420 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.579431 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:24Z","lastTransitionTime":"2025-12-05T10:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:24 crc kubenswrapper[4796]: E1205 10:28:24.588100 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z"
Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.590302 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.590334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.590367 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.590379 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.590388 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:24Z","lastTransitionTime":"2025-12-05T10:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:24 crc kubenswrapper[4796]: E1205 10:28:24.598622 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z"
Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.600847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.600874 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.600883 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.600896 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.600903 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:24Z","lastTransitionTime":"2025-12-05T10:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:24 crc kubenswrapper[4796]: E1205 10:28:24.608548 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z"
Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.610929 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.610961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.610970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.610982 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.610990 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:24Z","lastTransitionTime":"2025-12-05T10:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:24 crc kubenswrapper[4796]: E1205 10:28:24.618938 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:24Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:24 crc kubenswrapper[4796]: E1205 10:28:24.619036 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.619986 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.620010 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.620019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.620044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.620053 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:24Z","lastTransitionTime":"2025-12-05T10:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.721725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.721761 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.721771 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.721782 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.721790 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:24Z","lastTransitionTime":"2025-12-05T10:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.823761 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.823824 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.823834 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.823849 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.823863 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:24Z","lastTransitionTime":"2025-12-05T10:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.871266 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs\") pod \"network-metrics-daemon-sqdfm\" (UID: \"dcf780ba-edff-45ee-88e9-5b99e4d0e458\") " pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:24 crc kubenswrapper[4796]: E1205 10:28:24.871389 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 10:28:24 crc kubenswrapper[4796]: E1205 10:28:24.871473 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs podName:dcf780ba-edff-45ee-88e9-5b99e4d0e458 nodeName:}" failed. No retries permitted until 2025-12-05 10:28:40.87144182 +0000 UTC m=+67.159547334 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs") pod "network-metrics-daemon-sqdfm" (UID: "dcf780ba-edff-45ee-88e9-5b99e4d0e458") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.925616 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.925645 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.925654 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.925664 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:24 crc kubenswrapper[4796]: I1205 10:28:24.925674 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:24Z","lastTransitionTime":"2025-12-05T10:28:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.028154 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.028191 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.028202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.028217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.028231 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:25Z","lastTransitionTime":"2025-12-05T10:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.030147 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.030147 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.030226 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:25 crc kubenswrapper[4796]: E1205 10:28:25.030418 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.030580 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:25 crc kubenswrapper[4796]: E1205 10:28:25.030800 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:25 crc kubenswrapper[4796]: E1205 10:28:25.030851 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:25 crc kubenswrapper[4796]: E1205 10:28:25.030901 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.130567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.130614 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.130623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.130635 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.130645 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:25Z","lastTransitionTime":"2025-12-05T10:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.233385 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.233832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.233897 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.234040 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.234121 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:25Z","lastTransitionTime":"2025-12-05T10:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.337763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.337798 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.337807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.337821 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.337830 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:25Z","lastTransitionTime":"2025-12-05T10:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.439365 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.439415 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.439426 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.439435 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.439442 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:25Z","lastTransitionTime":"2025-12-05T10:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.540798 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.540827 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.540835 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.540847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.540874 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:25Z","lastTransitionTime":"2025-12-05T10:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.642904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.642925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.642933 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.642963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.642971 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:25Z","lastTransitionTime":"2025-12-05T10:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.744206 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.744264 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.744277 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.744288 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.744297 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:25Z","lastTransitionTime":"2025-12-05T10:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.846718 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.846759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.846767 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.846786 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.846795 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:25Z","lastTransitionTime":"2025-12-05T10:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.906330 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.907057 4796 scope.go:117] "RemoveContainer" containerID="a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a" Dec 05 10:28:25 crc kubenswrapper[4796]: E1205 10:28:25.907233 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.917753 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:25Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.927141 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:25Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.935307 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf780ba-edff-45ee-88e9-5b99e4d0e458\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sqdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:25Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.944048 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:25Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.949375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.949412 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.949422 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.949437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 
05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.949447 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:25Z","lastTransitionTime":"2025-12-05T10:28:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.956390 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:18Z\\\",\\\"message\\\":\\\"tart default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start 
default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z]\\\\nI1205 10:28:18.699418 6479 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a
270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:25Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.964445 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:25Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.971532 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:25Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.979360 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:25Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.988434 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d85857985
3e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:
27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:25Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:25 crc kubenswrapper[4796]: I1205 10:28:25.994820 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:25Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.001987 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:28:26Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.008290 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:26Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.016044 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:26Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.023879 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:26Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.031030 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8128986b7933bc3672b05ee1df32a49cab978
5765d3ece1169d5bc39977af6a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:26Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.044198 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:26Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.051727 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.051759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.051769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.051783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.051792 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:26Z","lastTransitionTime":"2025-12-05T10:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.053738 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a
841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:26Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.155403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.155448 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.155474 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.155491 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.155507 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:26Z","lastTransitionTime":"2025-12-05T10:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.257410 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.257466 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.257477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.257489 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.257498 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:26Z","lastTransitionTime":"2025-12-05T10:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.359534 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.359574 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.359585 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.359598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.359610 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:26Z","lastTransitionTime":"2025-12-05T10:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.461491 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.461529 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.461555 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.461570 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.461578 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:26Z","lastTransitionTime":"2025-12-05T10:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.562899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.562932 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.562941 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.562954 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.562963 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:26Z","lastTransitionTime":"2025-12-05T10:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.664850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.664886 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.664896 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.664908 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.664918 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:26Z","lastTransitionTime":"2025-12-05T10:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.766860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.766892 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.766901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.766914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.766923 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:26Z","lastTransitionTime":"2025-12-05T10:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.868978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.869010 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.869018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.869030 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.869038 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:26Z","lastTransitionTime":"2025-12-05T10:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.971189 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.971220 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.971228 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.971242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:26 crc kubenswrapper[4796]: I1205 10:28:26.971251 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:26Z","lastTransitionTime":"2025-12-05T10:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.030730 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.030752 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:27 crc kubenswrapper[4796]: E1205 10:28:27.030812 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.030826 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.030737 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:27 crc kubenswrapper[4796]: E1205 10:28:27.030871 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:27 crc kubenswrapper[4796]: E1205 10:28:27.030950 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:27 crc kubenswrapper[4796]: E1205 10:28:27.031011 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.072804 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.072835 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.072844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.072856 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.072866 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:27Z","lastTransitionTime":"2025-12-05T10:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.174922 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.174959 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.174967 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.174984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.174994 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:27Z","lastTransitionTime":"2025-12-05T10:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.276672 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.276720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.276728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.276739 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.276749 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:27Z","lastTransitionTime":"2025-12-05T10:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.332157 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.338450 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.342317 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:27Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.350786 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:27Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.358796 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8128986b7933bc3672b05ee1df32a49cab978
5765d3ece1169d5bc39977af6a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:27Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.371641 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:27Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.379005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.379038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.379047 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.379063 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.379075 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:27Z","lastTransitionTime":"2025-12-05T10:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.380760 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a
841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:27Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.390213 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:27Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.399408 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:27Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.407864 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf780ba-edff-45ee-88e9-5b99e4d0e458\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sqdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:27Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.417368 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:27Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.430530 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:18Z\\\",\\\"message\\\":\\\"tart default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start 
default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z]\\\\nI1205 10:28:18.699418 6479 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a
270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:27Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.439510 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:27Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.447450 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:27Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.456201 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:27Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.466015 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d85857985
3e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:
27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:27Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.472952 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:27Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.481188 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:28:27Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.481308 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.481342 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.481354 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.481369 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.481379 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:27Z","lastTransitionTime":"2025-12-05T10:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.488635 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:27Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.582828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.582857 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.582865 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.582878 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.582886 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:27Z","lastTransitionTime":"2025-12-05T10:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.685181 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.685211 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.685219 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.685230 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.685237 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:27Z","lastTransitionTime":"2025-12-05T10:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.786842 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.786902 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.786911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.786923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.786932 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:27Z","lastTransitionTime":"2025-12-05T10:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.889165 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.889197 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.889207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.889219 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.889228 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:27Z","lastTransitionTime":"2025-12-05T10:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.990545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.990583 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.990592 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.990603 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:27 crc kubenswrapper[4796]: I1205 10:28:27.990611 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:27Z","lastTransitionTime":"2025-12-05T10:28:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.092590 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.092652 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.092664 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.092678 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.092705 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:28Z","lastTransitionTime":"2025-12-05T10:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.194998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.195022 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.195032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.195060 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.195068 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:28Z","lastTransitionTime":"2025-12-05T10:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.296746 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.296897 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.296978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.297044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.297095 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:28Z","lastTransitionTime":"2025-12-05T10:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.399497 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.399542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.399552 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.399574 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.399586 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:28Z","lastTransitionTime":"2025-12-05T10:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.501509 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.501546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.501556 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.501570 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.501581 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:28Z","lastTransitionTime":"2025-12-05T10:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.604166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.604213 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.604225 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.604240 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.604252 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:28Z","lastTransitionTime":"2025-12-05T10:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.706314 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.706344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.706354 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.706371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.706382 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:28Z","lastTransitionTime":"2025-12-05T10:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.808360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.808413 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.808423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.808439 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.808452 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:28Z","lastTransitionTime":"2025-12-05T10:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.910732 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.910763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.910772 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.910790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:28 crc kubenswrapper[4796]: I1205 10:28:28.910810 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:28Z","lastTransitionTime":"2025-12-05T10:28:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.012864 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.012897 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.012906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.012916 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.012925 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:29Z","lastTransitionTime":"2025-12-05T10:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.030322 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.030378 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.030413 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.030410 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:29 crc kubenswrapper[4796]: E1205 10:28:29.030441 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:29 crc kubenswrapper[4796]: E1205 10:28:29.030533 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:29 crc kubenswrapper[4796]: E1205 10:28:29.030570 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:29 crc kubenswrapper[4796]: E1205 10:28:29.030726 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.114781 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.114814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.114823 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.114834 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.114842 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:29Z","lastTransitionTime":"2025-12-05T10:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.217020 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.217064 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.217073 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.217082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.217093 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:29Z","lastTransitionTime":"2025-12-05T10:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.318562 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.318596 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.318627 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.318639 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.318647 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:29Z","lastTransitionTime":"2025-12-05T10:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.421019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.421083 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.421099 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.421118 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.421130 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:29Z","lastTransitionTime":"2025-12-05T10:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.523159 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.523197 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.523207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.523225 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.523237 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:29Z","lastTransitionTime":"2025-12-05T10:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.625873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.625920 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.625929 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.625943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.625954 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:29Z","lastTransitionTime":"2025-12-05T10:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.728127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.728171 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.728181 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.728197 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.728209 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:29Z","lastTransitionTime":"2025-12-05T10:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.830575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.830602 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.830610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.830623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.830632 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:29Z","lastTransitionTime":"2025-12-05T10:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.932733 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.932779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.932789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.932806 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:29 crc kubenswrapper[4796]: I1205 10:28:29.932815 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:29Z","lastTransitionTime":"2025-12-05T10:28:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.034475 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.034509 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.034519 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.034530 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.034539 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:30Z","lastTransitionTime":"2025-12-05T10:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.136961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.137093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.137154 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.137225 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.137280 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:30Z","lastTransitionTime":"2025-12-05T10:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.238957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.239010 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.239021 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.239038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.239047 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:30Z","lastTransitionTime":"2025-12-05T10:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.341259 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.341305 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.341316 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.341331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.341340 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:30Z","lastTransitionTime":"2025-12-05T10:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.443858 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.443905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.443915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.443932 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.443943 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:30Z","lastTransitionTime":"2025-12-05T10:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.546214 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.546261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.546272 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.546285 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.546296 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:30Z","lastTransitionTime":"2025-12-05T10:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.648404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.648469 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.648479 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.648490 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.648498 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:30Z","lastTransitionTime":"2025-12-05T10:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.750837 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.750878 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.750889 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.750905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.750917 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:30Z","lastTransitionTime":"2025-12-05T10:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.853072 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.853103 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.853134 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.853145 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.853153 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:30Z","lastTransitionTime":"2025-12-05T10:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.954561 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.954613 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.954623 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.954639 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:30 crc kubenswrapper[4796]: I1205 10:28:30.954649 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:30Z","lastTransitionTime":"2025-12-05T10:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.030636 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.030768 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:31 crc kubenswrapper[4796]: E1205 10:28:31.030897 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.030928 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.030962 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:31 crc kubenswrapper[4796]: E1205 10:28:31.031043 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:31 crc kubenswrapper[4796]: E1205 10:28:31.031099 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:31 crc kubenswrapper[4796]: E1205 10:28:31.031150 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.056254 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.056298 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.056306 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.056319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.056329 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:31Z","lastTransitionTime":"2025-12-05T10:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.157979 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.158027 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.158037 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.158058 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.158070 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:31Z","lastTransitionTime":"2025-12-05T10:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.259949 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.259990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.260001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.260014 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.260027 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:31Z","lastTransitionTime":"2025-12-05T10:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.362101 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.362134 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.362145 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.362156 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.362165 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:31Z","lastTransitionTime":"2025-12-05T10:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.464401 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.464443 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.464469 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.464486 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.464495 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:31Z","lastTransitionTime":"2025-12-05T10:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.566767 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.566841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.566855 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.566878 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.566895 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:31Z","lastTransitionTime":"2025-12-05T10:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.669445 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.669505 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.669520 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.669538 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.669549 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:31Z","lastTransitionTime":"2025-12-05T10:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.771232 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.771267 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.771276 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.771289 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.771300 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:31Z","lastTransitionTime":"2025-12-05T10:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.873749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.873785 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.873793 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.873804 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.873813 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:31Z","lastTransitionTime":"2025-12-05T10:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.975567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.975608 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.975620 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.975634 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:31 crc kubenswrapper[4796]: I1205 10:28:31.975642 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:31Z","lastTransitionTime":"2025-12-05T10:28:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.077882 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.077918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.077927 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.077940 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.077948 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:32Z","lastTransitionTime":"2025-12-05T10:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.179794 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.179828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.179837 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.179850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.179859 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:32Z","lastTransitionTime":"2025-12-05T10:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.281312 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.281350 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.281359 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.281371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.281379 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:32Z","lastTransitionTime":"2025-12-05T10:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.382845 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.382879 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.382889 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.382911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.382920 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:32Z","lastTransitionTime":"2025-12-05T10:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.484993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.485022 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.485030 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.485041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.485049 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:32Z","lastTransitionTime":"2025-12-05T10:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.587538 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.587568 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.587576 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.587587 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.587595 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:32Z","lastTransitionTime":"2025-12-05T10:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.689747 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.689792 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.689800 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.689816 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.689824 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:32Z","lastTransitionTime":"2025-12-05T10:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.792055 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.792105 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.792114 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.792127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.792136 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:32Z","lastTransitionTime":"2025-12-05T10:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.893714 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.893752 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.893760 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.893774 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.893782 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:32Z","lastTransitionTime":"2025-12-05T10:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.995863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.995899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.995907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.995921 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:32 crc kubenswrapper[4796]: I1205 10:28:32.995931 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:32Z","lastTransitionTime":"2025-12-05T10:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.030663 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.030677 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.030729 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.030783 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:33 crc kubenswrapper[4796]: E1205 10:28:33.030869 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:33 crc kubenswrapper[4796]: E1205 10:28:33.030952 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:33 crc kubenswrapper[4796]: E1205 10:28:33.031027 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:33 crc kubenswrapper[4796]: E1205 10:28:33.031077 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.097291 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.097320 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.097329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.097338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.097347 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:33Z","lastTransitionTime":"2025-12-05T10:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.199133 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.199165 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.199173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.199187 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.199195 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:33Z","lastTransitionTime":"2025-12-05T10:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.300993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.301044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.301054 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.301068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.301076 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:33Z","lastTransitionTime":"2025-12-05T10:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.402675 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.402719 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.402727 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.402739 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.402747 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:33Z","lastTransitionTime":"2025-12-05T10:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.505071 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.505103 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.505112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.505123 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.505131 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:33Z","lastTransitionTime":"2025-12-05T10:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.606749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.606778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.606786 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.606799 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.606806 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:33Z","lastTransitionTime":"2025-12-05T10:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.708701 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.708728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.708738 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.708747 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.708754 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:33Z","lastTransitionTime":"2025-12-05T10:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.810388 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.810429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.810439 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.810453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.810475 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:33Z","lastTransitionTime":"2025-12-05T10:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.911493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.911525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.911537 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.911546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:33 crc kubenswrapper[4796]: I1205 10:28:33.911553 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:33Z","lastTransitionTime":"2025-12-05T10:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.016140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.016258 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.016325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.016390 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.016450 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:34Z","lastTransitionTime":"2025-12-05T10:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.039901 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.048671 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.061245 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:18Z\\\",\\\"message\\\":\\\"tart default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start 
default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z]\\\\nI1205 10:28:18.699418 6479 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a
270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.069156 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.075912 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.083793 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d0
1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.092715 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c
363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c
78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.101543 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.109544 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8128986b7933bc3672b05ee1df32a49cab9785765d3ece1169d5bc39977af6a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.117775 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.117804 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.117812 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.117826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.117833 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:34Z","lastTransitionTime":"2025-12-05T10:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.122315 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.130427 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.138160 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76461bd-d72c-4115-bce7-fb2d280cf460\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464e352ac46b2137117bccb9e78a428dc04fdeaef46f4b91f6ce209776f0a31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8101271bf7294e7870bda826831cfe83ac569186043e35df696bfa5fd9ddcef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced98d292165ffd0447e73f7e0602c556744ead05af25fcb331c676fbec5998d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.146149 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.154568 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.163044 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c
97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.171455 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.178665 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.184946 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf780ba-edff-45ee-88e9-5b99e4d0e458\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sqdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.219292 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.219322 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.219330 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.219342 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.219351 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:34Z","lastTransitionTime":"2025-12-05T10:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.320918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.320948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.320958 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.320970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.320978 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:34Z","lastTransitionTime":"2025-12-05T10:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.422921 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.422958 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.422967 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.422985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.422994 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:34Z","lastTransitionTime":"2025-12-05T10:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.525234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.525266 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.525291 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.525306 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.525314 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:34Z","lastTransitionTime":"2025-12-05T10:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.627037 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.627061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.627069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.627080 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.627089 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:34Z","lastTransitionTime":"2025-12-05T10:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.656170 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.656204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.656212 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.656227 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.656237 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:34Z","lastTransitionTime":"2025-12-05T10:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:34 crc kubenswrapper[4796]: E1205 10:28:34.664886 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.667389 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.667424 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.667436 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.667450 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.667459 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:34Z","lastTransitionTime":"2025-12-05T10:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:34 crc kubenswrapper[4796]: E1205 10:28:34.675485 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.677895 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.677923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.677932 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.677942 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.677949 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:34Z","lastTransitionTime":"2025-12-05T10:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:34 crc kubenswrapper[4796]: E1205 10:28:34.685604 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.687826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.687849 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.687859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.687869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.687877 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:34Z","lastTransitionTime":"2025-12-05T10:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:34 crc kubenswrapper[4796]: E1205 10:28:34.695109 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.697190 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.697217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.697226 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.697234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.697241 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:34Z","lastTransitionTime":"2025-12-05T10:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:34 crc kubenswrapper[4796]: E1205 10:28:34.705425 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:34Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:34 crc kubenswrapper[4796]: E1205 10:28:34.705566 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.728628 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.728651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.728659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.728670 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.728696 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:34Z","lastTransitionTime":"2025-12-05T10:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.830853 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.831319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.831399 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.831478 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.831543 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:34Z","lastTransitionTime":"2025-12-05T10:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.933261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.933293 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.933301 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.933312 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:34 crc kubenswrapper[4796]: I1205 10:28:34.933320 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:34Z","lastTransitionTime":"2025-12-05T10:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.030305 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.030401 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.030427 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.030347 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:35 crc kubenswrapper[4796]: E1205 10:28:35.030622 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:35 crc kubenswrapper[4796]: E1205 10:28:35.030757 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:35 crc kubenswrapper[4796]: E1205 10:28:35.030839 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:35 crc kubenswrapper[4796]: E1205 10:28:35.030950 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.034795 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.034828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.034837 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.034850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.034860 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:35Z","lastTransitionTime":"2025-12-05T10:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.136740 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.136774 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.136782 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.136796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.136805 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:35Z","lastTransitionTime":"2025-12-05T10:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.238636 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.238665 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.238674 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.238703 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.238712 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:35Z","lastTransitionTime":"2025-12-05T10:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.341039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.341099 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.341112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.341135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.341151 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:35Z","lastTransitionTime":"2025-12-05T10:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.443142 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.443179 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.443188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.443200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.443208 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:35Z","lastTransitionTime":"2025-12-05T10:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.544983 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.545021 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.545032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.545048 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.545064 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:35Z","lastTransitionTime":"2025-12-05T10:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.646861 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.646908 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.646919 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.646935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.646953 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:35Z","lastTransitionTime":"2025-12-05T10:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.749185 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.749223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.749233 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.749247 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.749258 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:35Z","lastTransitionTime":"2025-12-05T10:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.850975 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.851016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.851029 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.851044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.851057 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:35Z","lastTransitionTime":"2025-12-05T10:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.953227 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.953258 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.953268 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.953283 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:35 crc kubenswrapper[4796]: I1205 10:28:35.953294 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:35Z","lastTransitionTime":"2025-12-05T10:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.054735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.054774 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.054785 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.054800 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.054811 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:36Z","lastTransitionTime":"2025-12-05T10:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.163781 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.163822 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.163833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.163847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.163858 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:36Z","lastTransitionTime":"2025-12-05T10:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.266188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.266300 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.266369 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.266444 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.266537 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:36Z","lastTransitionTime":"2025-12-05T10:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.368751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.368783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.368793 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.368807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.368817 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:36Z","lastTransitionTime":"2025-12-05T10:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.470514 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.470639 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.470730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.470811 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.470873 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:36Z","lastTransitionTime":"2025-12-05T10:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.572554 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.572575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.572583 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.572593 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.572603 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:36Z","lastTransitionTime":"2025-12-05T10:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.674125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.674152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.674161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.674192 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.674202 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:36Z","lastTransitionTime":"2025-12-05T10:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.775567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.775590 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.775598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.775612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.775633 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:36Z","lastTransitionTime":"2025-12-05T10:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.877624 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.877658 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.877667 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.877679 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.877713 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:36Z","lastTransitionTime":"2025-12-05T10:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.978873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.979016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.979092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.979151 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:36 crc kubenswrapper[4796]: I1205 10:28:36.979221 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:36Z","lastTransitionTime":"2025-12-05T10:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.030063 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.030093 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.030097 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.030118 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:37 crc kubenswrapper[4796]: E1205 10:28:37.030536 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:37 crc kubenswrapper[4796]: E1205 10:28:37.030396 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:37 crc kubenswrapper[4796]: E1205 10:28:37.030582 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:37 crc kubenswrapper[4796]: E1205 10:28:37.030275 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.081105 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.081130 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.081140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.081151 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.081162 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:37Z","lastTransitionTime":"2025-12-05T10:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.183861 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.183905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.183914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.183930 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.183943 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:37Z","lastTransitionTime":"2025-12-05T10:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.285727 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.285769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.285778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.285798 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.285807 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:37Z","lastTransitionTime":"2025-12-05T10:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.387246 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.387284 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.387293 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.387313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.387323 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:37Z","lastTransitionTime":"2025-12-05T10:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.488541 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.488595 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.488604 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.488626 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.488639 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:37Z","lastTransitionTime":"2025-12-05T10:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.590421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.590463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.590474 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.590511 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.590523 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:37Z","lastTransitionTime":"2025-12-05T10:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.692768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.692800 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.692807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.692817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.692825 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:37Z","lastTransitionTime":"2025-12-05T10:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.794828 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.794855 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.794863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.794873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.794880 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:37Z","lastTransitionTime":"2025-12-05T10:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.896725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.896771 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.896780 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.896796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.896805 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:37Z","lastTransitionTime":"2025-12-05T10:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.998285 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.998329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.998338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.998355 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:37 crc kubenswrapper[4796]: I1205 10:28:37.998365 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:37Z","lastTransitionTime":"2025-12-05T10:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.099881 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.099924 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.099934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.099950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.099963 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:38Z","lastTransitionTime":"2025-12-05T10:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.201398 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.201445 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.201455 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.201470 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.201502 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:38Z","lastTransitionTime":"2025-12-05T10:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.302616 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.302649 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.302657 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.302668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.302676 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:38Z","lastTransitionTime":"2025-12-05T10:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.404796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.404837 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.404851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.404867 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.404877 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:38Z","lastTransitionTime":"2025-12-05T10:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.507054 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.507099 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.507108 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.507121 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.507131 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:38Z","lastTransitionTime":"2025-12-05T10:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.608807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.608830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.608841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.608851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.608858 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:38Z","lastTransitionTime":"2025-12-05T10:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.711223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.711255 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.711263 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.711274 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.711283 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:38Z","lastTransitionTime":"2025-12-05T10:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.813036 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.813065 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.813074 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.813087 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.813096 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:38Z","lastTransitionTime":"2025-12-05T10:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.915064 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.915097 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.915106 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.915120 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:38 crc kubenswrapper[4796]: I1205 10:28:38.915130 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:38Z","lastTransitionTime":"2025-12-05T10:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.016540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.016580 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.016588 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.016602 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.016612 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:39Z","lastTransitionTime":"2025-12-05T10:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.030771 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:39 crc kubenswrapper[4796]: E1205 10:28:39.030975 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.030840 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:39 crc kubenswrapper[4796]: E1205 10:28:39.031143 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.030818 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:39 crc kubenswrapper[4796]: E1205 10:28:39.031518 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.030862 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:39 crc kubenswrapper[4796]: E1205 10:28:39.031712 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.117934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.117984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.117995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.118008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.118015 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:39Z","lastTransitionTime":"2025-12-05T10:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.219642 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.219694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.219709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.219724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.219733 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:39Z","lastTransitionTime":"2025-12-05T10:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.320848 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.320891 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.320901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.320911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.320919 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:39Z","lastTransitionTime":"2025-12-05T10:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.422309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.422331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.422338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.422349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.422356 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:39Z","lastTransitionTime":"2025-12-05T10:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.524470 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.524585 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.524763 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.524911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.525040 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:39Z","lastTransitionTime":"2025-12-05T10:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.626800 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.626850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.626860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.626869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.626877 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:39Z","lastTransitionTime":"2025-12-05T10:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.728518 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.728545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.728553 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.728562 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.728569 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:39Z","lastTransitionTime":"2025-12-05T10:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.829996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.830022 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.830030 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.830039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.830047 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:39Z","lastTransitionTime":"2025-12-05T10:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.931926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.931950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.931959 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.931969 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:39 crc kubenswrapper[4796]: I1205 10:28:39.931976 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:39Z","lastTransitionTime":"2025-12-05T10:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.031159 4796 scope.go:117] "RemoveContainer" containerID="a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.033107 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.033133 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.033142 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.033151 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.033159 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:40Z","lastTransitionTime":"2025-12-05T10:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.134967 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.135087 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.135097 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.135111 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.135120 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:40Z","lastTransitionTime":"2025-12-05T10:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.238316 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.238409 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.238430 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.238444 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.238452 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:40Z","lastTransitionTime":"2025-12-05T10:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.302845 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xvb5x_d158ce1c-6415-4e69-a1fe-862330b25ff3/ovnkube-controller/2.log" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.304918 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerStarted","Data":"428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb"} Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.305539 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.322847 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:18Z\\\",\\\"message\\\":\\\"tart default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z]\\\\nI1205 10:28:18.699418 6479 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"n
ame\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:40Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.335539 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:40Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.340319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.340337 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.340348 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.340358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.340366 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:40Z","lastTransitionTime":"2025-12-05T10:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.343553 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-05T10:28:40Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.352033 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-05T10:28:40Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.361378 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:40Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.368804 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:40Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.377366 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:28:40Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.384167 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:40Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.391362 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76461bd-d72c-4115-bce7-fb2d280cf460\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464e352ac46b2137117bccb9e78a428dc04fdeaef46f4b91f6ce209776f0a31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8101271bf7294e7870bda826831cfe83ac569186043e35df696bfa5fd9ddcef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced98d292165ffd0447e73f7e0602c556744ead05af25fcb331c676fbec5998d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:40Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.399712 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:40Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.408724 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:40Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.415808 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8128986b7933bc3672b05ee1df32a49cab978
5765d3ece1169d5bc39977af6a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:40Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.428806 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:40Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.442123 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.442163 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.442172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.442186 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.442194 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:40Z","lastTransitionTime":"2025-12-05T10:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.442721 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a
841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:40Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.451790 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:40Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.459515 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:40Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.466007 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf780ba-edff-45ee-88e9-5b99e4d0e458\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sqdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:40Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.475148 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:40Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.543562 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.543607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.543618 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.543632 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 
05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.543642 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:40Z","lastTransitionTime":"2025-12-05T10:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.645721 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.646514 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.646594 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.646654 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.646745 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:40Z","lastTransitionTime":"2025-12-05T10:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.748731 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.748766 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.748776 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.748790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.748799 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:40Z","lastTransitionTime":"2025-12-05T10:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.850961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.850989 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.850997 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.851006 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.851014 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:40Z","lastTransitionTime":"2025-12-05T10:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.904347 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs\") pod \"network-metrics-daemon-sqdfm\" (UID: \"dcf780ba-edff-45ee-88e9-5b99e4d0e458\") " pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:40 crc kubenswrapper[4796]: E1205 10:28:40.904470 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 10:28:40 crc kubenswrapper[4796]: E1205 10:28:40.904643 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs podName:dcf780ba-edff-45ee-88e9-5b99e4d0e458 nodeName:}" failed. No retries permitted until 2025-12-05 10:29:12.904630583 +0000 UTC m=+99.192736096 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs") pod "network-metrics-daemon-sqdfm" (UID: "dcf780ba-edff-45ee-88e9-5b99e4d0e458") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.952588 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.952631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.952642 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.952660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:40 crc kubenswrapper[4796]: I1205 10:28:40.952669 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:40Z","lastTransitionTime":"2025-12-05T10:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.030421 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.030440 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:41 crc kubenswrapper[4796]: E1205 10:28:41.030533 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.030570 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:41 crc kubenswrapper[4796]: E1205 10:28:41.030649 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:41 crc kubenswrapper[4796]: E1205 10:28:41.030719 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.030427 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:41 crc kubenswrapper[4796]: E1205 10:28:41.030959 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.054168 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.054193 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.054201 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.054212 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.054221 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:41Z","lastTransitionTime":"2025-12-05T10:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.155816 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.155847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.155856 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.155888 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.155898 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:41Z","lastTransitionTime":"2025-12-05T10:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.257608 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.257849 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.257926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.257990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.258049 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:41Z","lastTransitionTime":"2025-12-05T10:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.308875 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xvb5x_d158ce1c-6415-4e69-a1fe-862330b25ff3/ovnkube-controller/3.log" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.309372 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xvb5x_d158ce1c-6415-4e69-a1fe-862330b25ff3/ovnkube-controller/2.log" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.311767 4796 generic.go:334] "Generic (PLEG): container finished" podID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerID="428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb" exitCode=1 Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.311833 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerDied","Data":"428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb"} Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.311939 4796 scope.go:117] "RemoveContainer" containerID="a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.312308 4796 scope.go:117] "RemoveContainer" containerID="428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb" Dec 05 10:28:41 crc kubenswrapper[4796]: E1205 10:28:41.312443 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.324391 4796 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b9
28a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:41Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.334095 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:41Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.342764 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:41Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.349989 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf780ba-edff-45ee-88e9-5b99e4d0e458\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sqdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:41Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.358372 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:41Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.359419 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.359452 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.359463 4796 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.359477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.359486 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:41Z","lastTransitionTime":"2025-12-05T10:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.367385 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff
31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:41Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.379165 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a82596965ccaab3623a597a7d47a047ff2942ff84669ec020a3a2fda0464cb7a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:18Z\\\",\\\"message\\\":\\\"tart default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start 
default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:18Z is after 2025-08-24T17:21:41Z]\\\\nI1205 10:28:18.699418 6479 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-dns-operator/metrics]} name:Service_openshift-dns-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.174:9393:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d937b3b3-82c3-4791-9a66-41b9fed53e9d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:40Z\\\",\\\"message\\\":\\\"rg/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1205 10:28:40.644678 6793 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to 
star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01f
e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:41Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.386846 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:28:41Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.393230 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:41Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.401169 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d0
1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:41Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.410703 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c
363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c
78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:41Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.417186 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:41Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.429657 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10
:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:41Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.437404 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:41Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.444403 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76461bd-d72c-4115-bce7-fb2d280cf460\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464e352ac46b2137117bccb9e78a428dc04fdeaef46f4b91f6ce209776f0a31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8101271bf7294e7870bda826831cfe83ac569186043e35df696bfa5fd9ddcef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced98d292165ffd0447e73f7e0602c556744ead05af25fcb331c676fbec5998d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:41Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.451815 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:41Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.459074 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:41Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.461283 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.461349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.461359 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.461374 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.461383 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:41Z","lastTransitionTime":"2025-12-05T10:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.467376 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8128986b7933bc3672b05ee1df32a49cab9785765d3ece1169d5bc39977af6a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:41Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.562860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.562901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.562910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.562926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.562937 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:41Z","lastTransitionTime":"2025-12-05T10:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.665249 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.665285 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.665296 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.665309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.665318 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:41Z","lastTransitionTime":"2025-12-05T10:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.767166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.767197 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.767205 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.767218 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.767226 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:41Z","lastTransitionTime":"2025-12-05T10:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.869134 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.869165 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.869176 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.869188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.869196 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:41Z","lastTransitionTime":"2025-12-05T10:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.971067 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.971089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.971096 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.971107 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:41 crc kubenswrapper[4796]: I1205 10:28:41.971114 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:41Z","lastTransitionTime":"2025-12-05T10:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.072273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.072319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.072330 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.072346 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.072358 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:42Z","lastTransitionTime":"2025-12-05T10:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.174027 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.174049 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.174057 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.174075 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.174083 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:42Z","lastTransitionTime":"2025-12-05T10:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.275766 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.275796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.275804 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.275814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.275821 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:42Z","lastTransitionTime":"2025-12-05T10:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.315064 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xvb5x_d158ce1c-6415-4e69-a1fe-862330b25ff3/ovnkube-controller/3.log" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.317452 4796 scope.go:117] "RemoveContainer" containerID="428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb" Dec 05 10:28:42 crc kubenswrapper[4796]: E1205 10:28:42.317596 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.318102 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cqj7h_7d541e60-9b92-4b9d-be51-5bd87e76deac/kube-multus/0.log" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.318137 4796 generic.go:334] "Generic (PLEG): container finished" podID="7d541e60-9b92-4b9d-be51-5bd87e76deac" containerID="4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01" exitCode=1 Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.318158 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cqj7h" event={"ID":"7d541e60-9b92-4b9d-be51-5bd87e76deac","Type":"ContainerDied","Data":"4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01"} Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.318408 4796 scope.go:117] "RemoveContainer" containerID="4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.327635 4796 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.333760 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.345851 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.355749 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d85857985
3e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:
27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.362420 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.374651 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.377338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.377361 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.377370 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.377383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.377392 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:42Z","lastTransitionTime":"2025-12-05T10:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.384013 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a
841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.393229 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76461bd-d72c-4115-bce7-fb2d280cf460\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464e352ac46b2137117bccb9e78a428dc04fdeaef46f4b91f6ce209776f0a31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8101271bf7294e7870bda826831cfe83ac569186043e35df696bfa5fd9ddcef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced98d292165ffd0447e73f7e0602c556744ead05af25fcb331c676fbec5998d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.401432 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.409461 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.416914 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8128986b7933bc3672b05ee1df32a49cab978
5765d3ece1169d5bc39977af6a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.426727 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c
97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.435837 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.444285 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.452146 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf780ba-edff-45ee-88e9-5b99e4d0e458\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sqdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.460450 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.467994 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.479253 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.479299 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.479328 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:42 crc 
kubenswrapper[4796]: I1205 10:28:42.479343 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.479354 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:42Z","lastTransitionTime":"2025-12-05T10:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.481791 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:40Z\\\",\\\"message\\\":\\\"rg/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1205 10:28:40.644678 6793 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a
270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.490602 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.498259 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.505997 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf780ba-edff-45ee-88e9-5b99e4d0e458\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sqdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.515030 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.527582 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:40Z\\\",\\\"message\\\":\\\"rg/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1205 10:28:40.644678 6793 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a
270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.536854 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.545287 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.554209 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:42Z\\\",\\\"message\\\":\\\"2025-12-05T10:27:56+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bfc8cff1-0c1d-472e-9857-6b3f321e587a\\\\n2025-12-05T10:27:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bfc8cff1-0c1d-472e-9857-6b3f321e587a to /host/opt/cni/bin/\\\\n2025-12-05T10:27:57Z [verbose] multus-daemon started\\\\n2025-12-05T10:27:57Z [verbose] Readiness Indicator file check\\\\n2025-12-05T10:28:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.564574 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1
dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.572320 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.579620 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.580658 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.580701 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.580712 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.580725 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.580733 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:42Z","lastTransitionTime":"2025-12-05T10:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.586959 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.594384 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76461bd-d72c-4115-bce7-fb2d280cf460\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464e352ac46b2137117bccb9e78a428dc04fdeaef46f4b91f6ce209776f0a31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8101271bf7294e7870bda826831cfe83ac569186043e35df696bfa5fd9ddcef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced98d292165ffd0447e73f7e0602c556744ead05af25fcb331c676fbec5998d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:
27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.601704 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.609473 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.617361 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8128986b7933bc3672b05ee1df32a49cab978
5765d3ece1169d5bc39977af6a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.635743 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.644246 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:
35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:42Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.682534 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.682633 
4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.682722 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.682803 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.682871 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:42Z","lastTransitionTime":"2025-12-05T10:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.784766 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.785544 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.785624 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.785710 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.785790 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:42Z","lastTransitionTime":"2025-12-05T10:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.887492 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.887612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.887668 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.887772 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.887843 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:42Z","lastTransitionTime":"2025-12-05T10:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.989478 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.989522 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.989532 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.989545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:42 crc kubenswrapper[4796]: I1205 10:28:42.989555 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:42Z","lastTransitionTime":"2025-12-05T10:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.030858 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.030939 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:43 crc kubenswrapper[4796]: E1205 10:28:43.030976 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.031041 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:43 crc kubenswrapper[4796]: E1205 10:28:43.031042 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:43 crc kubenswrapper[4796]: E1205 10:28:43.031089 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.031100 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:43 crc kubenswrapper[4796]: E1205 10:28:43.031153 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.090979 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.091002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.091010 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.091023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.091032 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:43Z","lastTransitionTime":"2025-12-05T10:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.192369 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.192399 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.192410 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.192422 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.192431 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:43Z","lastTransitionTime":"2025-12-05T10:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.294083 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.294119 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.294128 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.294143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.294152 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:43Z","lastTransitionTime":"2025-12-05T10:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.321953 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cqj7h_7d541e60-9b92-4b9d-be51-5bd87e76deac/kube-multus/0.log" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.321993 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cqj7h" event={"ID":"7d541e60-9b92-4b9d-be51-5bd87e76deac","Type":"ContainerStarted","Data":"a560e3361c3254d165aaa6d35ccfa35a96aa2453c5a9a8168abd8e442280a65f"} Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.335895 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:40Z\\\",\\\"message\\\":\\\"rg/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1205 10:28:40.644678 6793 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a
270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:43Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.343750 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:43Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.351000 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:43Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.359425 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a560e3361c3254d165aaa6d35ccfa35a96aa2453c5a9a8168abd8e442280a65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:42Z\\\",\\\"message\\\":\\\"2025-12-05T10:27:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bfc8cff1-0c1d-472e-9857-6b3f321e587a\\\\n2025-12-05T10:27:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bfc8cff1-0c1d-472e-9857-6b3f321e587a to /host/opt/cni/bin/\\\\n2025-12-05T10:27:57Z [verbose] multus-daemon started\\\\n2025-12-05T10:27:57Z [verbose] 
Readiness Indicator file check\\\\n2025-12-05T10:28:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:43Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.368617 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40
b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:43Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.375124 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:43Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.382302 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T1
0:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:43Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.388737 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:43Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.395616 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76461bd-d72c-4115-bce7-fb2d280cf460\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464e352ac46b2137117bccb9e78a428dc04fdeaef46f4b91f6ce209776f0a31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8101271bf7294e7870bda826831cfe83ac569186043e35df696bfa5fd9ddcef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced98d292165ffd0447e73f7e0602c556744ead05af25fcb331c676fbec5998d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\
\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:43Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.396179 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.396207 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.396216 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 
05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.396227 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.396235 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:43Z","lastTransitionTime":"2025-12-05T10:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.403013 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:43Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.410465 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:43Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.417404 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8128986b7933bc3672b05ee1df32a49cab978
5765d3ece1169d5bc39977af6a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:43Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.430334 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:43Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.437941 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:
35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:43Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.450030 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:43Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.457730 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:43Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.464029 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf780ba-edff-45ee-88e9-5b99e4d0e458\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sqdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:43Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.472233 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:43Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.497836 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.497888 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.497902 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.497915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 
05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.497924 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:43Z","lastTransitionTime":"2025-12-05T10:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.599542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.599562 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.599570 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.599580 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.599589 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:43Z","lastTransitionTime":"2025-12-05T10:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.700878 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.700900 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.700908 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.700918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.700926 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:43Z","lastTransitionTime":"2025-12-05T10:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.802905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.802934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.802961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.802971 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.802978 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:43Z","lastTransitionTime":"2025-12-05T10:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.904962 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.904990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.905000 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.905014 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:43 crc kubenswrapper[4796]: I1205 10:28:43.905023 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:43Z","lastTransitionTime":"2025-12-05T10:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.006416 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.006444 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.006453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.006463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.006470 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:44Z","lastTransitionTime":"2025-12-05T10:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.040759 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.048804 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.060780 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:40Z\\\",\\\"message\\\":\\\"rg/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1205 10:28:40.644678 6793 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a
270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.068196 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.074637 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.082381 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a560e3361c3254d165aaa6d35ccfa35a96aa2453c5a9a8168abd8e442280a65
f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:42Z\\\",\\\"message\\\":\\\"2025-12-05T10:27:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bfc8cff1-0c1d-472e-9857-6b3f321e587a\\\\n2025-12-05T10:27:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bfc8cff1-0c1d-472e-9857-6b3f321e587a to /host/opt/cni/bin/\\\\n2025-12-05T10:27:57Z [verbose] multus-daemon started\\\\n2025-12-05T10:27:57Z [verbose] Readiness Indicator file check\\\\n2025-12-05T10:28:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.091133 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"
exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.100314 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.107404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.107436 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.107445 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.107455 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.107462 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:44Z","lastTransitionTime":"2025-12-05T10:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.113181 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.121647 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.128664 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76461bd-d72c-4115-bce7-fb2d280cf460\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464e352ac46b2137117bccb9e78a428dc04fdeaef46f4b91f6ce209776f0a31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8101271bf7294e7870bda826831cfe83ac569186043e35df696bfa5fd9ddcef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced98d292165ffd0447e73f7e0602c556744ead05af25fcb331c676fbec5998d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.136178 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.143559 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.150232 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8128986b7933bc3672b05ee1df32a49cab978
5765d3ece1169d5bc39977af6a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.158747 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c
97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.167078 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.174447 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.181895 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf780ba-edff-45ee-88e9-5b99e4d0e458\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sqdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.209278 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.209306 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.209315 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.209328 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.209337 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:44Z","lastTransitionTime":"2025-12-05T10:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.310542 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.310562 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.310582 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.310593 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.310600 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:44Z","lastTransitionTime":"2025-12-05T10:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.411841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.411862 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.411872 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.411886 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.411896 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:44Z","lastTransitionTime":"2025-12-05T10:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.513186 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.513208 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.513217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.513227 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.513234 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:44Z","lastTransitionTime":"2025-12-05T10:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.614899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.614926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.614935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.614944 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.614951 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:44Z","lastTransitionTime":"2025-12-05T10:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.717576 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.717621 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.717629 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.717646 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.717655 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:44Z","lastTransitionTime":"2025-12-05T10:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.805753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.805791 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.805800 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.805812 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.805819 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:44Z","lastTransitionTime":"2025-12-05T10:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:44 crc kubenswrapper[4796]: E1205 10:28:44.816281 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.819051 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.819084 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.819093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.819107 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.819115 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:44Z","lastTransitionTime":"2025-12-05T10:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.830830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.830857 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.830866 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.830877 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.830883 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:44Z","lastTransitionTime":"2025-12-05T10:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:44 crc kubenswrapper[4796]: E1205 10:28:44.838916 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.845816 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.845853 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.845863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.845877 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.845885 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:44Z","lastTransitionTime":"2025-12-05T10:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:44 crc kubenswrapper[4796]: E1205 10:28:44.854469 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.857644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.857769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.857857 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.857940 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.858020 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:44Z","lastTransitionTime":"2025-12-05T10:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:44 crc kubenswrapper[4796]: E1205 10:28:44.866567 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:44Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:44 crc kubenswrapper[4796]: E1205 10:28:44.866835 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.867848 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.867964 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.868052 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.868139 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.868218 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:44Z","lastTransitionTime":"2025-12-05T10:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.970159 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.970290 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.970353 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.970410 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:44 crc kubenswrapper[4796]: I1205 10:28:44.970462 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:44Z","lastTransitionTime":"2025-12-05T10:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.030983 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:45 crc kubenswrapper[4796]: E1205 10:28:45.031078 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.031131 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.031167 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:45 crc kubenswrapper[4796]: E1205 10:28:45.031227 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:45 crc kubenswrapper[4796]: E1205 10:28:45.031298 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.031001 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:45 crc kubenswrapper[4796]: E1205 10:28:45.031603 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.072443 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.072468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.072477 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.072489 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.072510 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:45Z","lastTransitionTime":"2025-12-05T10:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.174275 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.174298 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.174305 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.174315 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.174321 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:45Z","lastTransitionTime":"2025-12-05T10:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.276336 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.276362 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.276370 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.276381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.276388 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:45Z","lastTransitionTime":"2025-12-05T10:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.378251 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.378280 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.378288 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.378299 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.378308 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:45Z","lastTransitionTime":"2025-12-05T10:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.480037 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.480064 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.480071 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.480081 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.480089 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:45Z","lastTransitionTime":"2025-12-05T10:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.581364 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.581402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.581411 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.581420 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.581427 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:45Z","lastTransitionTime":"2025-12-05T10:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.683774 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.683801 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.683811 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.683823 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.683832 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:45Z","lastTransitionTime":"2025-12-05T10:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.786108 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.786138 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.786147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.786161 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.786170 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:45Z","lastTransitionTime":"2025-12-05T10:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.887897 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.888103 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.888182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.888242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.888297 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:45Z","lastTransitionTime":"2025-12-05T10:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.990300 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.990326 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.990335 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.990344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:45 crc kubenswrapper[4796]: I1205 10:28:45.990351 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:45Z","lastTransitionTime":"2025-12-05T10:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.092093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.092534 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.092642 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.092743 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.092926 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:46Z","lastTransitionTime":"2025-12-05T10:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.194262 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.194286 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.194295 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.194308 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.194317 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:46Z","lastTransitionTime":"2025-12-05T10:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.296011 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.296042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.296050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.296061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.296069 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:46Z","lastTransitionTime":"2025-12-05T10:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.397215 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.397239 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.397248 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.397266 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.397275 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:46Z","lastTransitionTime":"2025-12-05T10:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.498917 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.498945 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.498952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.498978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.498987 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:46Z","lastTransitionTime":"2025-12-05T10:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.600637 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.600875 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.600885 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.600899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.600924 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:46Z","lastTransitionTime":"2025-12-05T10:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.703897 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.703927 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.703935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.703946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.703954 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:46Z","lastTransitionTime":"2025-12-05T10:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.805463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.805514 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.805525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.805539 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.805547 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:46Z","lastTransitionTime":"2025-12-05T10:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.907071 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.907095 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.907103 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.907113 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:46 crc kubenswrapper[4796]: I1205 10:28:46.907119 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:46Z","lastTransitionTime":"2025-12-05T10:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.008887 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.008909 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.008925 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.008934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.008942 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:47Z","lastTransitionTime":"2025-12-05T10:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.030670 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:47 crc kubenswrapper[4796]: E1205 10:28:47.030763 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.030809 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:47 crc kubenswrapper[4796]: E1205 10:28:47.030848 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.030886 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:47 crc kubenswrapper[4796]: E1205 10:28:47.030921 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.030951 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:47 crc kubenswrapper[4796]: E1205 10:28:47.030983 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.110672 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.110719 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.110727 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.110736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.110743 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:47Z","lastTransitionTime":"2025-12-05T10:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.211979 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.212001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.212008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.212017 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.212023 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:47Z","lastTransitionTime":"2025-12-05T10:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.313800 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.313820 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.313827 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.313835 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.313843 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:47Z","lastTransitionTime":"2025-12-05T10:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.415897 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.415936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.415946 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.415962 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.415972 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:47Z","lastTransitionTime":"2025-12-05T10:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.518194 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.518342 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.518543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.518713 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.518850 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:47Z","lastTransitionTime":"2025-12-05T10:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.621470 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.621508 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.621516 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.621528 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.621537 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:47Z","lastTransitionTime":"2025-12-05T10:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.722647 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.722832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.722995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.723157 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.723329 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:47Z","lastTransitionTime":"2025-12-05T10:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.824872 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.824962 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.825017 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.825072 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.825120 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:47Z","lastTransitionTime":"2025-12-05T10:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.927952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.927985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.927998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.928012 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:47 crc kubenswrapper[4796]: I1205 10:28:47.928022 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:47Z","lastTransitionTime":"2025-12-05T10:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.029359 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.029533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.029607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.029671 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.029751 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:48Z","lastTransitionTime":"2025-12-05T10:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.131155 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.131465 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.131544 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.131606 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.131673 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:48Z","lastTransitionTime":"2025-12-05T10:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.232777 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.232870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.232948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.233014 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.233069 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:48Z","lastTransitionTime":"2025-12-05T10:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.334781 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.334802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.334813 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.334822 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.334828 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:48Z","lastTransitionTime":"2025-12-05T10:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.436036 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.436058 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.436066 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.436075 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.436082 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:48Z","lastTransitionTime":"2025-12-05T10:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.537884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.538005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.538071 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.538127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.538183 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:48Z","lastTransitionTime":"2025-12-05T10:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.639536 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.639558 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.639566 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.639575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.639582 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:48Z","lastTransitionTime":"2025-12-05T10:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.741807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.741848 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.741857 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.741871 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.741879 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:48Z","lastTransitionTime":"2025-12-05T10:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.843404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.843508 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.843581 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.843649 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.843731 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:48Z","lastTransitionTime":"2025-12-05T10:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.945431 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.945476 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.945484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.945505 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:48 crc kubenswrapper[4796]: I1205 10:28:48.945515 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:48Z","lastTransitionTime":"2025-12-05T10:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.030295 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.030380 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.030390 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:49 crc kubenswrapper[4796]: E1205 10:28:49.030572 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.030606 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:49 crc kubenswrapper[4796]: E1205 10:28:49.030761 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:49 crc kubenswrapper[4796]: E1205 10:28:49.030779 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:49 crc kubenswrapper[4796]: E1205 10:28:49.030672 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.047110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.047136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.047145 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.047155 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.047162 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:49Z","lastTransitionTime":"2025-12-05T10:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.148671 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.148982 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.149063 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.149140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.149198 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:49Z","lastTransitionTime":"2025-12-05T10:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.251383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.251416 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.251425 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.251435 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.251442 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:49Z","lastTransitionTime":"2025-12-05T10:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.352856 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.352891 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.352899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.352911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.352919 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:49Z","lastTransitionTime":"2025-12-05T10:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.454998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.455033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.455042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.455054 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.455063 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:49Z","lastTransitionTime":"2025-12-05T10:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.556714 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.556747 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.556755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.556769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.556778 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:49Z","lastTransitionTime":"2025-12-05T10:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.658354 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.658388 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.658397 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.658409 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.658418 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:49Z","lastTransitionTime":"2025-12-05T10:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.759947 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.759973 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.759981 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.759993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.760001 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:49Z","lastTransitionTime":"2025-12-05T10:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.862228 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.862261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.862271 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.862283 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.862293 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:49Z","lastTransitionTime":"2025-12-05T10:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.964030 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.964061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.964069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.964079 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:49 crc kubenswrapper[4796]: I1205 10:28:49.964085 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:49Z","lastTransitionTime":"2025-12-05T10:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.065833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.065859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.065866 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.065875 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.065881 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:50Z","lastTransitionTime":"2025-12-05T10:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.167303 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.167331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.167340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.167349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.167356 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:50Z","lastTransitionTime":"2025-12-05T10:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.269088 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.269106 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.269114 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.269124 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.269131 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:50Z","lastTransitionTime":"2025-12-05T10:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.371113 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.371256 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.371327 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.371401 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.371469 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:50Z","lastTransitionTime":"2025-12-05T10:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.473007 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.473037 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.473045 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.473054 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.473061 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:50Z","lastTransitionTime":"2025-12-05T10:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.575239 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.575273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.575281 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.575294 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.575302 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:50Z","lastTransitionTime":"2025-12-05T10:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.677175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.677206 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.677215 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.677228 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.677237 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:50Z","lastTransitionTime":"2025-12-05T10:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.778933 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.778964 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.778974 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.778995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.779010 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:50Z","lastTransitionTime":"2025-12-05T10:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.880931 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.880965 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.880974 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.880986 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.880994 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:50Z","lastTransitionTime":"2025-12-05T10:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.982953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.982982 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.982990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.983001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:50 crc kubenswrapper[4796]: I1205 10:28:50.983009 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:50Z","lastTransitionTime":"2025-12-05T10:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.030658 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:51 crc kubenswrapper[4796]: E1205 10:28:51.030759 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.030803 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.030817 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:51 crc kubenswrapper[4796]: E1205 10:28:51.030891 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:51 crc kubenswrapper[4796]: E1205 10:28:51.030970 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.031077 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:51 crc kubenswrapper[4796]: E1205 10:28:51.031205 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.085163 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.085269 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.085333 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.085400 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.085463 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:51Z","lastTransitionTime":"2025-12-05T10:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.187022 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.187048 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.187059 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.187071 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.187079 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:51Z","lastTransitionTime":"2025-12-05T10:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.288873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.289031 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.289092 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.289158 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.289223 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:51Z","lastTransitionTime":"2025-12-05T10:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.391241 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.391290 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.391300 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.391309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.391317 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:51Z","lastTransitionTime":"2025-12-05T10:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.493276 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.493304 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.493312 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.493323 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.493330 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:51Z","lastTransitionTime":"2025-12-05T10:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.595819 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.595899 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.595912 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.595942 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.595957 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:51Z","lastTransitionTime":"2025-12-05T10:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.697380 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.697421 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.697429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.697441 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.697458 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:51Z","lastTransitionTime":"2025-12-05T10:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.799838 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.799912 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.799923 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.799938 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.799947 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:51Z","lastTransitionTime":"2025-12-05T10:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.901830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.901867 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.901880 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.901891 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:51 crc kubenswrapper[4796]: I1205 10:28:51.901901 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:51Z","lastTransitionTime":"2025-12-05T10:28:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.003751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.003812 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.003825 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.003847 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.003862 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:52Z","lastTransitionTime":"2025-12-05T10:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.105843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.105891 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.105905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.105921 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.105933 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:52Z","lastTransitionTime":"2025-12-05T10:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.208204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.208243 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.208251 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.208267 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.208279 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:52Z","lastTransitionTime":"2025-12-05T10:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.310364 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.310406 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.310419 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.310434 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.310445 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:52Z","lastTransitionTime":"2025-12-05T10:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.412177 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.412216 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.412226 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.412241 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.412252 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:52Z","lastTransitionTime":"2025-12-05T10:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.514284 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.514317 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.514325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.514338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.514346 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:52Z","lastTransitionTime":"2025-12-05T10:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.616330 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.616360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.616370 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.616381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.616389 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:52Z","lastTransitionTime":"2025-12-05T10:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.718129 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.718155 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.718163 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.718176 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.718184 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:52Z","lastTransitionTime":"2025-12-05T10:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.819753 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.819804 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.819813 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.819822 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.819829 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:52Z","lastTransitionTime":"2025-12-05T10:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.921498 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.921535 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.921543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.921556 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:52 crc kubenswrapper[4796]: I1205 10:28:52.921564 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:52Z","lastTransitionTime":"2025-12-05T10:28:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.023524 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.023546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.023554 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.023564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.023571 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:53Z","lastTransitionTime":"2025-12-05T10:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.031002 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.031063 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.031070 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.031020 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:53 crc kubenswrapper[4796]: E1205 10:28:53.031131 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:53 crc kubenswrapper[4796]: E1205 10:28:53.031232 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:53 crc kubenswrapper[4796]: E1205 10:28:53.031285 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:53 crc kubenswrapper[4796]: E1205 10:28:53.031367 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.125217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.125298 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.125309 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.125318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.125325 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:53Z","lastTransitionTime":"2025-12-05T10:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.227387 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.227423 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.227431 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.227445 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.227453 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:53Z","lastTransitionTime":"2025-12-05T10:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.329407 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.329438 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.329447 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.329458 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.329466 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:53Z","lastTransitionTime":"2025-12-05T10:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.431160 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.431202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.431212 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.431239 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.431248 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:53Z","lastTransitionTime":"2025-12-05T10:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.532709 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.532751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.532759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.532772 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.532782 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:53Z","lastTransitionTime":"2025-12-05T10:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.634415 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.634473 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.634483 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.634526 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.634535 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:53Z","lastTransitionTime":"2025-12-05T10:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.735909 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.735940 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.735966 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.735980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.735990 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:53Z","lastTransitionTime":"2025-12-05T10:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.837812 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.837845 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.837853 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.837865 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.837872 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:53Z","lastTransitionTime":"2025-12-05T10:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.939772 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.939924 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.940016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.940093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:53 crc kubenswrapper[4796]: I1205 10:28:53.940160 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:53Z","lastTransitionTime":"2025-12-05T10:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.037158 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.042239 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.042262 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.042270 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.042280 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.042288 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:54Z","lastTransitionTime":"2025-12-05T10:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.044146 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.052152 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.059639 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76461bd-d72c-4115-bce7-fb2d280cf460\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464e352ac46b2137117bccb9e78a428dc04fdeaef46f4b91f6ce209776f0a31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8101271bf7294e7870bda826831cfe83ac569186043e35df696bfa5fd9ddcef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced98d292165ffd0447e73f7e0602c556744ead05af25fcb331c676fbec5998d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.067381 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.074785 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.081571 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8128986b7933bc3672b05ee1df32a49cab978
5765d3ece1169d5bc39977af6a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.089738 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c
97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.097603 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.110794 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.117217 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf780ba-edff-45ee-88e9-5b99e4d0e458\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sqdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.124813 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.131408 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.142376 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:40Z\\\",\\\"message\\\":\\\"rg/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1205 10:28:40.644678 6793 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a
270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.143485 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.143518 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.143526 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.143541 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.143549 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:54Z","lastTransitionTime":"2025-12-05T10:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.149206 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.155643 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.163795 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a560e3361c3254d165aaa6d35ccfa35a96aa2453c5a9a8168abd8e442280a65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:42Z\\\",\\\"message\\\":\\\"2025-12-05T10:27:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bfc8cff1-0c1d-472e-9857-6b3f321e587a\\\\n2025-12-05T10:27:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bfc8cff1-0c1d-472e-9857-6b3f321e587a to /host/opt/cni/bin/\\\\n2025-12-05T10:27:57Z [verbose] multus-daemon started\\\\n2025-12-05T10:27:57Z [verbose] 
Readiness Indicator file check\\\\n2025-12-05T10:28:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.172373 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40
b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.178627 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:54Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.245310 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.245340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.245349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.245361 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.245369 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:54Z","lastTransitionTime":"2025-12-05T10:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.347358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.347397 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.347405 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.347418 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.347426 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:54Z","lastTransitionTime":"2025-12-05T10:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.449191 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.449223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.449231 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.449242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.449251 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:54Z","lastTransitionTime":"2025-12-05T10:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.551019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.551048 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.551056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.551069 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.551079 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:54Z","lastTransitionTime":"2025-12-05T10:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.652653 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.652707 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.652719 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.652732 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.652741 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:54Z","lastTransitionTime":"2025-12-05T10:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.754234 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.754261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.754269 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.754279 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.754287 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:54Z","lastTransitionTime":"2025-12-05T10:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.855724 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.855750 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.855757 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.855767 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.855775 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:54Z","lastTransitionTime":"2025-12-05T10:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.921358 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:28:54 crc kubenswrapper[4796]: E1205 10:28:54.921437 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-05 10:29:58.921420673 +0000 UTC m=+145.209526187 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.957436 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.957464 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.957473 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.957482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:54 crc kubenswrapper[4796]: I1205 10:28:54.957504 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:54Z","lastTransitionTime":"2025-12-05T10:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.022288 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.022324 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.022341 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.022371 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.022447 4796 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.022456 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.022478 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.022486 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 10:29:59.022474415 +0000 UTC m=+145.310579929 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.022489 4796 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.022516 4796 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.022562 4796 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-05 10:29:59.02254568 +0000 UTC m=+145.310651203 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.022604 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.022622 4796 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.022611 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-05 10:29:59.022595012 +0000 UTC m=+145.310700535 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.022634 4796 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.022661 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-05 10:29:59.022651779 +0000 UTC m=+145.310757292 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.030969 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.030999 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.030981 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.030969 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.031060 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.031118 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.031169 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.031213 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.059230 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.059257 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.059265 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.059273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.059281 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:55Z","lastTransitionTime":"2025-12-05T10:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.093368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.093402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.093413 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.093425 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.093436 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:55Z","lastTransitionTime":"2025-12-05T10:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.101900 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:55Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.104023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.104046 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.104054 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.104063 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.104072 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:55Z","lastTransitionTime":"2025-12-05T10:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.111548 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:55Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.113444 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.113467 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.113476 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.113486 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.113503 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:55Z","lastTransitionTime":"2025-12-05T10:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.121120 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:55Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.123183 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.123239 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.123247 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.123257 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.123264 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:55Z","lastTransitionTime":"2025-12-05T10:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.130708 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:55Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.132482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.132522 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.132530 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.132540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.132547 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:55Z","lastTransitionTime":"2025-12-05T10:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.139699 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:28:55Z is after 2025-08-24T17:21:41Z" Dec 05 10:28:55 crc kubenswrapper[4796]: E1205 10:28:55.139799 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.160706 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.160731 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.160739 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.160750 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.160758 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:55Z","lastTransitionTime":"2025-12-05T10:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.262736 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.262767 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.262776 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.262789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.262796 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:55Z","lastTransitionTime":"2025-12-05T10:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.363997 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.364042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.364053 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.364068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.364078 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:55Z","lastTransitionTime":"2025-12-05T10:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.466039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.466086 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.466096 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.466109 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.466118 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:55Z","lastTransitionTime":"2025-12-05T10:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.568259 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.568284 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.568293 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.568302 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.568311 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:55Z","lastTransitionTime":"2025-12-05T10:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.669931 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.669951 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.669958 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.669968 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.669976 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:55Z","lastTransitionTime":"2025-12-05T10:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.771974 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.772002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.772010 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.772019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.772026 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:55Z","lastTransitionTime":"2025-12-05T10:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.873734 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.873769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.873777 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.873789 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.873798 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:55Z","lastTransitionTime":"2025-12-05T10:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.975816 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.975867 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.975878 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.975890 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:55 crc kubenswrapper[4796]: I1205 10:28:55.975898 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:55Z","lastTransitionTime":"2025-12-05T10:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.077231 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.077359 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.077438 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.077532 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.077588 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:56Z","lastTransitionTime":"2025-12-05T10:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.179398 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.179532 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.179600 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.179653 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.179733 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:56Z","lastTransitionTime":"2025-12-05T10:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.281287 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.281339 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.281349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.281361 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.281370 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:56Z","lastTransitionTime":"2025-12-05T10:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.383318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.383354 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.383362 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.383375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.383384 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:56Z","lastTransitionTime":"2025-12-05T10:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.484817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.484846 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.484854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.484866 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.484873 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:56Z","lastTransitionTime":"2025-12-05T10:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.585975 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.586012 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.586023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.586035 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.586043 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:56Z","lastTransitionTime":"2025-12-05T10:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.687271 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.687306 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.687314 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.687325 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.687333 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:56Z","lastTransitionTime":"2025-12-05T10:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.789307 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.789339 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.789348 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.789360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.789368 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:56Z","lastTransitionTime":"2025-12-05T10:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.890900 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.891013 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.891073 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.891139 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.891187 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:56Z","lastTransitionTime":"2025-12-05T10:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.992782 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.992813 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.992821 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.992833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:56 crc kubenswrapper[4796]: I1205 10:28:56.992841 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:56Z","lastTransitionTime":"2025-12-05T10:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.030127 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.030159 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:57 crc kubenswrapper[4796]: E1205 10:28:57.030215 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.030252 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.030338 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:57 crc kubenswrapper[4796]: E1205 10:28:57.030340 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:57 crc kubenswrapper[4796]: E1205 10:28:57.030595 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:57 crc kubenswrapper[4796]: E1205 10:28:57.030631 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.030821 4796 scope.go:117] "RemoveContainer" containerID="428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb" Dec 05 10:28:57 crc kubenswrapper[4796]: E1205 10:28:57.030999 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.095241 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.095380 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.095463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.095543 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.095605 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:57Z","lastTransitionTime":"2025-12-05T10:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.197645 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.197694 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.197704 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.197717 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.197726 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:57Z","lastTransitionTime":"2025-12-05T10:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.299913 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.299950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.299958 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.299971 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.299979 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:57Z","lastTransitionTime":"2025-12-05T10:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.401859 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.401889 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.401897 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.401909 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.401918 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:57Z","lastTransitionTime":"2025-12-05T10:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.503537 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.503586 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.503597 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.503609 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.503617 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:57Z","lastTransitionTime":"2025-12-05T10:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.605165 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.605195 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.605204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.605213 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.605221 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:57Z","lastTransitionTime":"2025-12-05T10:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.706817 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.706852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.706870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.706883 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.706895 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:57Z","lastTransitionTime":"2025-12-05T10:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.808392 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.808457 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.808470 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.808480 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.808487 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:57Z","lastTransitionTime":"2025-12-05T10:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.910332 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.910360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.910369 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.910380 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:57 crc kubenswrapper[4796]: I1205 10:28:57.910387 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:57Z","lastTransitionTime":"2025-12-05T10:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.012404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.012442 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.012451 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.012465 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.012474 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:58Z","lastTransitionTime":"2025-12-05T10:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.114351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.114383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.114392 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.114403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.114411 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:58Z","lastTransitionTime":"2025-12-05T10:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.218851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.218882 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.218891 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.218905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.218913 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:58Z","lastTransitionTime":"2025-12-05T10:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.320506 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.320541 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.320551 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.321212 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.321223 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:58Z","lastTransitionTime":"2025-12-05T10:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.423102 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.423132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.423141 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.423153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.423160 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:58Z","lastTransitionTime":"2025-12-05T10:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.524631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.524655 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.524663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.524673 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.524694 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:58Z","lastTransitionTime":"2025-12-05T10:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.626953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.626992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.627003 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.627018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.627027 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:58Z","lastTransitionTime":"2025-12-05T10:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.728394 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.728482 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.728555 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.728614 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.728704 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:58Z","lastTransitionTime":"2025-12-05T10:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.830211 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.830249 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.830261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.830281 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.830290 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:58Z","lastTransitionTime":"2025-12-05T10:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.932322 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.932351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.932359 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.932370 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:58 crc kubenswrapper[4796]: I1205 10:28:58.932378 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:58Z","lastTransitionTime":"2025-12-05T10:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.030417 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.030463 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:28:59 crc kubenswrapper[4796]: E1205 10:28:59.030540 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.030441 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:28:59 crc kubenswrapper[4796]: E1205 10:28:59.030636 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:28:59 crc kubenswrapper[4796]: E1205 10:28:59.030735 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.030816 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:28:59 crc kubenswrapper[4796]: E1205 10:28:59.030936 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.033915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.033950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.033959 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.033969 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.033979 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:59Z","lastTransitionTime":"2025-12-05T10:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.136088 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.136125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.136136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.136150 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.136158 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:59Z","lastTransitionTime":"2025-12-05T10:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.237548 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.237671 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.237785 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.237854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.237941 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:59Z","lastTransitionTime":"2025-12-05T10:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.339937 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.340268 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.340328 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.340400 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.340464 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:59Z","lastTransitionTime":"2025-12-05T10:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.442101 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.442127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.442135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.442144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.442152 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:59Z","lastTransitionTime":"2025-12-05T10:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.543735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.543777 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.543787 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.543800 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.543811 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:59Z","lastTransitionTime":"2025-12-05T10:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.645230 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.645257 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.645266 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.645278 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.645285 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:59Z","lastTransitionTime":"2025-12-05T10:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.746596 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.746618 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.746626 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.746635 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.746642 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:59Z","lastTransitionTime":"2025-12-05T10:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.848399 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.848427 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.848436 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.848449 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.848456 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:59Z","lastTransitionTime":"2025-12-05T10:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.950237 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.950262 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.950269 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.950278 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:28:59 crc kubenswrapper[4796]: I1205 10:28:59.950285 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:28:59Z","lastTransitionTime":"2025-12-05T10:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.051614 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.051644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.051653 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.051663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.051671 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:00Z","lastTransitionTime":"2025-12-05T10:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.152866 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.152897 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.152905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.152936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.152945 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:00Z","lastTransitionTime":"2025-12-05T10:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.255341 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.255368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.255376 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.255385 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.255392 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:00Z","lastTransitionTime":"2025-12-05T10:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.356464 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.356502 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.356511 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.356521 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.356529 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:00Z","lastTransitionTime":"2025-12-05T10:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.458016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.458040 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.458047 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.458056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.458063 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:00Z","lastTransitionTime":"2025-12-05T10:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.559898 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.560051 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.560135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.560217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.560289 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:00Z","lastTransitionTime":"2025-12-05T10:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.661386 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.661409 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.661420 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.661430 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.661437 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:00Z","lastTransitionTime":"2025-12-05T10:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.763040 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.763093 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.763104 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.763119 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.763127 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:00Z","lastTransitionTime":"2025-12-05T10:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.864812 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.864843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.864869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.864881 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.864889 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:00Z","lastTransitionTime":"2025-12-05T10:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.967083 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.967118 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.967129 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.967142 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:00 crc kubenswrapper[4796]: I1205 10:29:00.967152 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:00Z","lastTransitionTime":"2025-12-05T10:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.030944 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.031182 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:29:01 crc kubenswrapper[4796]: E1205 10:29:01.031240 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.031266 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.031278 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:29:01 crc kubenswrapper[4796]: E1205 10:29:01.031366 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:29:01 crc kubenswrapper[4796]: E1205 10:29:01.031425 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:29:01 crc kubenswrapper[4796]: E1205 10:29:01.031520 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.068734 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.068769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.068779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.068796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.068806 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:01Z","lastTransitionTime":"2025-12-05T10:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.170749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.170776 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.170785 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.170813 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.170821 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:01Z","lastTransitionTime":"2025-12-05T10:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.272252 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.272293 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.272301 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.272357 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.272368 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:01Z","lastTransitionTime":"2025-12-05T10:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.374279 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.374306 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.374315 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.374342 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.374351 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:01Z","lastTransitionTime":"2025-12-05T10:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.476111 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.476140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.476147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.476157 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.476165 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:01Z","lastTransitionTime":"2025-12-05T10:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.577928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.577969 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.577979 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.577994 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.578007 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:01Z","lastTransitionTime":"2025-12-05T10:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.680067 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.680090 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.680099 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.680109 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.680116 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:01Z","lastTransitionTime":"2025-12-05T10:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.782275 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.782345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.782366 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.782394 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.782411 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:01Z","lastTransitionTime":"2025-12-05T10:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.885274 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.885328 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.885338 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.885352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.885362 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:01Z","lastTransitionTime":"2025-12-05T10:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.987425 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.987447 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.987457 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.987468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:01 crc kubenswrapper[4796]: I1205 10:29:01.987476 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:01Z","lastTransitionTime":"2025-12-05T10:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.088795 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.088931 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.089006 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.089080 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.089137 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:02Z","lastTransitionTime":"2025-12-05T10:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.191200 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.191320 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.191403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.191495 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.191564 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:02Z","lastTransitionTime":"2025-12-05T10:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.293584 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.293637 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.293649 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.293661 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.293668 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:02Z","lastTransitionTime":"2025-12-05T10:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.394758 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.394786 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.394795 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.394805 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.394814 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:02Z","lastTransitionTime":"2025-12-05T10:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.495874 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.495990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.496064 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.496136 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.496201 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:02Z","lastTransitionTime":"2025-12-05T10:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.597950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.597978 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.597986 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.597996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.598004 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:02Z","lastTransitionTime":"2025-12-05T10:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.699902 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.700021 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.700084 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.700162 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.700230 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:02Z","lastTransitionTime":"2025-12-05T10:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.801319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.801351 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.801361 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.801375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.801385 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:02Z","lastTransitionTime":"2025-12-05T10:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.903294 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.903329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.903337 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.903350 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:02 crc kubenswrapper[4796]: I1205 10:29:02.903358 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:02Z","lastTransitionTime":"2025-12-05T10:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.005764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.005796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.005805 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.005823 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.005833 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:03Z","lastTransitionTime":"2025-12-05T10:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.030074 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.030094 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.030097 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:29:03 crc kubenswrapper[4796]: E1205 10:29:03.030167 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.030083 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:29:03 crc kubenswrapper[4796]: E1205 10:29:03.030248 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:29:03 crc kubenswrapper[4796]: E1205 10:29:03.030300 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:29:03 crc kubenswrapper[4796]: E1205 10:29:03.030336 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.107876 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.107901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.107909 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.107919 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.107927 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:03Z","lastTransitionTime":"2025-12-05T10:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.209976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.210008 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.210017 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.210029 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.210037 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:03Z","lastTransitionTime":"2025-12-05T10:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.312125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.312172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.312181 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.312199 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.312207 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:03Z","lastTransitionTime":"2025-12-05T10:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.413823 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.413850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.413858 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.413869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.413878 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:03Z","lastTransitionTime":"2025-12-05T10:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.514872 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.514901 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.514909 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.514918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.514926 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:03Z","lastTransitionTime":"2025-12-05T10:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.616474 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.616518 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.616527 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.616539 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.616551 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:03Z","lastTransitionTime":"2025-12-05T10:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.718251 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.718306 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.718316 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.718329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.718340 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:03Z","lastTransitionTime":"2025-12-05T10:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.819906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.819943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.819951 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.819962 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.819969 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:03Z","lastTransitionTime":"2025-12-05T10:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.921871 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.921897 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.921904 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.921915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:03 crc kubenswrapper[4796]: I1205 10:29:03.921922 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:03Z","lastTransitionTime":"2025-12-05T10:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.023869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.023893 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.023900 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.023908 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.023915 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:04Z","lastTransitionTime":"2025-12-05T10:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.040231 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a
841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.047706 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76461bd-d72c-4115-bce7-fb2d280cf460\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464e352ac46b2137117bccb9e78a428dc04fdeaef46f4b91f6ce209776f0a31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8101271bf7294e7870bda826831cfe83ac569186043e35df696bfa5fd9ddcef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced98d292165ffd0447e73f7e0602c556744ead05af25fcb331c676fbec5998d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.055828 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.062885 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.069599 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8128986b7933bc3672b05ee1df32a49cab978
5765d3ece1169d5bc39977af6a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.082240 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.090600 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.098828 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.107024 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.113605 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf780ba-edff-45ee-88e9-5b99e4d0e458\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sqdfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.120011 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bf706c1-9a18-4a04-8c0f-2048e862c35e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a445a8402d0df1de83bf6eae592b3fec8ffd202b76dc5e093b3d5a06b9e38f51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e554379579217035feeceb2e5573308297b4dfa8d3d1436369be3b12c65c39ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e554379579217035feeceb2e5573308297b4dfa8d3d1436369be3b12c65c39ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.125900 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.125927 4796 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.125937 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.125949 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.125957 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:04Z","lastTransitionTime":"2025-12-05T10:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.126844 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.138083 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:40Z\\\",\\\"message\\\":\\\"rg/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1205 10:28:40.644678 6793 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a
270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.145715 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.151975 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.159868 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a560e3361c3254d165aaa6d35ccfa35a96aa2453c5a9a8168abd8e442280a65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:42Z\\\",\\\"message\\\":\\\"2025-12-05T10:27:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bfc8cff1-0c1d-472e-9857-6b3f321e587a\\\\n2025-12-05T10:27:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bfc8cff1-0c1d-472e-9857-6b3f321e587a to /host/opt/cni/bin/\\\\n2025-12-05T10:27:57Z [verbose] multus-daemon started\\\\n2025-12-05T10:27:57Z [verbose] Readiness Indicator file check\\\\n2025-12-05T10:28:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.168502 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1
dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.174438 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.181863 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-05T10:29:04Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.227137 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.227166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.227177 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.227190 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.227198 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:04Z","lastTransitionTime":"2025-12-05T10:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.328827 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.328865 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.328873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.328883 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.328892 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:04Z","lastTransitionTime":"2025-12-05T10:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.430992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.431042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.431051 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.431064 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.431073 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:04Z","lastTransitionTime":"2025-12-05T10:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.532875 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.532906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.532914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.532926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.532935 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:04Z","lastTransitionTime":"2025-12-05T10:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.634575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.634634 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.634644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.634655 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.634663 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:04Z","lastTransitionTime":"2025-12-05T10:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.736348 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.736373 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.736381 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.736390 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.736398 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:04Z","lastTransitionTime":"2025-12-05T10:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.838141 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.838177 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.838216 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.838229 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.838239 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:04Z","lastTransitionTime":"2025-12-05T10:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.939837 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.939863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.939870 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.939879 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:04 crc kubenswrapper[4796]: I1205 10:29:04.939886 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:04Z","lastTransitionTime":"2025-12-05T10:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.030570 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:29:05 crc kubenswrapper[4796]: E1205 10:29:05.030659 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.030577 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.030570 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:29:05 crc kubenswrapper[4796]: E1205 10:29:05.030754 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.030582 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:29:05 crc kubenswrapper[4796]: E1205 10:29:05.030880 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:29:05 crc kubenswrapper[4796]: E1205 10:29:05.030944 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.041137 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.041167 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.041180 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.041190 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.041197 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:05Z","lastTransitionTime":"2025-12-05T10:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.142831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.142865 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.142875 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.142888 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.142896 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:05Z","lastTransitionTime":"2025-12-05T10:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.244905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.244936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.244945 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.244985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.244997 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:05Z","lastTransitionTime":"2025-12-05T10:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.346751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.346779 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.346787 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.346796 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.346804 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:05Z","lastTransitionTime":"2025-12-05T10:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.448233 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.448261 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.448269 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.448278 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.448285 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:05Z","lastTransitionTime":"2025-12-05T10:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.528752 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.528784 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.528794 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.528804 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.528815 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:05Z","lastTransitionTime":"2025-12-05T10:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:05 crc kubenswrapper[4796]: E1205 10:29:05.537582 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.539840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.539860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.539867 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.539877 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.539884 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:05Z","lastTransitionTime":"2025-12-05T10:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:05 crc kubenswrapper[4796]: E1205 10:29:05.547227 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.549133 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.549157 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.549165 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.549174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.549181 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:05Z","lastTransitionTime":"2025-12-05T10:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:05 crc kubenswrapper[4796]: E1205 10:29:05.557043 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.558892 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.558918 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.558926 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.558936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.558944 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:05Z","lastTransitionTime":"2025-12-05T10:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:05 crc kubenswrapper[4796]: E1205 10:29:05.566208 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.568027 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.568051 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.568059 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.568067 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.568074 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:05Z","lastTransitionTime":"2025-12-05T10:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:05 crc kubenswrapper[4796]: E1205 10:29:05.576079 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:05Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:05 crc kubenswrapper[4796]: E1205 10:29:05.576191 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.576956 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.577002 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.577012 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.577027 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.577036 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:05Z","lastTransitionTime":"2025-12-05T10:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.678869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.678891 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.678900 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.678910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.678918 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:05Z","lastTransitionTime":"2025-12-05T10:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.780468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.780509 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.780522 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.780546 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.780555 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:05Z","lastTransitionTime":"2025-12-05T10:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.882079 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.882109 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.882118 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.882131 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.882139 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:05Z","lastTransitionTime":"2025-12-05T10:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.983672 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.983744 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.983755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.983768 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:05 crc kubenswrapper[4796]: I1205 10:29:05.983778 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:05Z","lastTransitionTime":"2025-12-05T10:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.085414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.085437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.085444 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.085453 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.085459 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:06Z","lastTransitionTime":"2025-12-05T10:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.187242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.187266 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.187273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.187281 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.187289 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:06Z","lastTransitionTime":"2025-12-05T10:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.288975 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.289005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.289014 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.289024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.289032 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:06Z","lastTransitionTime":"2025-12-05T10:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.390567 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.390604 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.390613 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.390626 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.390635 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:06Z","lastTransitionTime":"2025-12-05T10:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.492585 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.492620 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.492629 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.492642 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.492651 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:06Z","lastTransitionTime":"2025-12-05T10:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.594128 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.594173 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.594182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.594192 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.594199 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:06Z","lastTransitionTime":"2025-12-05T10:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.696410 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.696440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.696447 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.696459 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.696466 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:06Z","lastTransitionTime":"2025-12-05T10:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.798528 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.798558 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.798568 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.798578 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.798586 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:06Z","lastTransitionTime":"2025-12-05T10:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.900315 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.900346 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.900356 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.900367 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:06 crc kubenswrapper[4796]: I1205 10:29:06.900376 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:06Z","lastTransitionTime":"2025-12-05T10:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.002378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.002405 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.002414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.002424 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.002431 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:07Z","lastTransitionTime":"2025-12-05T10:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.031027 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.031046 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.031106 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:29:07 crc kubenswrapper[4796]: E1205 10:29:07.031225 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.031245 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:29:07 crc kubenswrapper[4796]: E1205 10:29:07.031377 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:29:07 crc kubenswrapper[4796]: E1205 10:29:07.031412 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:29:07 crc kubenswrapper[4796]: E1205 10:29:07.031661 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.104442 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.104467 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.104476 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.104496 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.104504 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:07Z","lastTransitionTime":"2025-12-05T10:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.206219 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.206252 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.206263 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.206277 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.206288 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:07Z","lastTransitionTime":"2025-12-05T10:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.308112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.308141 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.308152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.308163 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.308171 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:07Z","lastTransitionTime":"2025-12-05T10:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.409404 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.409438 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.409448 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.409460 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.409469 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:07Z","lastTransitionTime":"2025-12-05T10:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.511659 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.511708 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.511717 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.511729 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.511736 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:07Z","lastTransitionTime":"2025-12-05T10:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.613510 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.613541 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.613549 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.613561 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.613571 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:07Z","lastTransitionTime":"2025-12-05T10:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.715442 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.715465 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.715473 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.715493 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.715502 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:07Z","lastTransitionTime":"2025-12-05T10:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.817993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.818026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.818034 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.818047 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.818055 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:07Z","lastTransitionTime":"2025-12-05T10:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.919797 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.919830 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.919840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.919852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:07 crc kubenswrapper[4796]: I1205 10:29:07.919860 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:07Z","lastTransitionTime":"2025-12-05T10:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.022018 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.022056 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.022064 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.022078 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.022093 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:08Z","lastTransitionTime":"2025-12-05T10:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.123548 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.123571 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.123578 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.123591 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.123598 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:08Z","lastTransitionTime":"2025-12-05T10:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.225268 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.225368 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.225437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.225513 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.225578 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:08Z","lastTransitionTime":"2025-12-05T10:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.327471 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.327504 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.327512 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.327521 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.327527 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:08Z","lastTransitionTime":"2025-12-05T10:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.429593 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.429620 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.429628 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.429637 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.429646 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:08Z","lastTransitionTime":"2025-12-05T10:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.531348 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.531389 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.531399 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.531413 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.531423 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:08Z","lastTransitionTime":"2025-12-05T10:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.633358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.633388 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.633398 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.633409 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.633416 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:08Z","lastTransitionTime":"2025-12-05T10:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.735041 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.735060 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.735068 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.735079 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.735086 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:08Z","lastTransitionTime":"2025-12-05T10:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.836821 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.836852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.836863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.836874 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.836882 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:08Z","lastTransitionTime":"2025-12-05T10:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.937949 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.937976 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.937984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.937994 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:08 crc kubenswrapper[4796]: I1205 10:29:08.938001 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:08Z","lastTransitionTime":"2025-12-05T10:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.030705 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.030730 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:29:09 crc kubenswrapper[4796]: E1205 10:29:09.030797 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.030711 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.030839 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:29:09 crc kubenswrapper[4796]: E1205 10:29:09.030888 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:29:09 crc kubenswrapper[4796]: E1205 10:29:09.030918 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:29:09 crc kubenswrapper[4796]: E1205 10:29:09.030965 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.039255 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.039285 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.039294 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.039306 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.039315 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:09Z","lastTransitionTime":"2025-12-05T10:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.140798 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.140826 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.140834 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.140843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.140850 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:09Z","lastTransitionTime":"2025-12-05T10:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.242545 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.242571 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.242599 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.242610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.242617 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:09Z","lastTransitionTime":"2025-12-05T10:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.345147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.345174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.345182 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.345193 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.345220 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:09Z","lastTransitionTime":"2025-12-05T10:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.446594 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.446621 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.446630 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.446639 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.446647 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:09Z","lastTransitionTime":"2025-12-05T10:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.548336 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.548386 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.548396 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.548407 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.548415 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:09Z","lastTransitionTime":"2025-12-05T10:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.649886 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.649905 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.649913 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.649921 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.649928 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:09Z","lastTransitionTime":"2025-12-05T10:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.752081 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.752108 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.752116 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.752127 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.752134 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:09Z","lastTransitionTime":"2025-12-05T10:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.853983 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.854025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.854032 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.854047 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.854056 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:09Z","lastTransitionTime":"2025-12-05T10:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.956355 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.956384 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.956392 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.956402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:09 crc kubenswrapper[4796]: I1205 10:29:09.956409 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:09Z","lastTransitionTime":"2025-12-05T10:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.057575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.057603 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.057612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.057621 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.057629 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:10Z","lastTransitionTime":"2025-12-05T10:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.159720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.159760 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.159769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.159782 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.159793 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:10Z","lastTransitionTime":"2025-12-05T10:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.261749 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.261780 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.261788 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.261799 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.261807 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:10Z","lastTransitionTime":"2025-12-05T10:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.363257 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.363286 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.363295 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.363331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.363339 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:10Z","lastTransitionTime":"2025-12-05T10:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.464987 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.465025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.465033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.465047 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.465054 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:10Z","lastTransitionTime":"2025-12-05T10:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.566781 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.566814 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.566824 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.566836 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.566845 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:10Z","lastTransitionTime":"2025-12-05T10:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.669037 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.669080 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.669088 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.669102 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.669111 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:10Z","lastTransitionTime":"2025-12-05T10:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.771273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.771312 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.771321 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.771335 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.771345 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:10Z","lastTransitionTime":"2025-12-05T10:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.872947 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.872984 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.872992 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.873004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.873012 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:10Z","lastTransitionTime":"2025-12-05T10:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.975050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.975074 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.975082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.975091 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:10 crc kubenswrapper[4796]: I1205 10:29:10.975098 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:10Z","lastTransitionTime":"2025-12-05T10:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.030636 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.030664 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.030669 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:29:11 crc kubenswrapper[4796]: E1205 10:29:11.030747 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.030641 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:29:11 crc kubenswrapper[4796]: E1205 10:29:11.030871 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:29:11 crc kubenswrapper[4796]: E1205 10:29:11.031156 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:29:11 crc kubenswrapper[4796]: E1205 10:29:11.031198 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.031320 4796 scope.go:117] "RemoveContainer" containerID="428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb" Dec 05 10:29:11 crc kubenswrapper[4796]: E1205 10:29:11.031457 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.076266 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.076286 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.076301 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.076313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.076319 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:11Z","lastTransitionTime":"2025-12-05T10:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.177648 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.177677 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.177702 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.177713 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.177721 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:11Z","lastTransitionTime":"2025-12-05T10:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.279319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.279348 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.279357 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.279371 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.279379 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:11Z","lastTransitionTime":"2025-12-05T10:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.380557 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.380602 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.380612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.380627 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.380638 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:11Z","lastTransitionTime":"2025-12-05T10:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.482718 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.482760 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.482769 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.482783 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.482794 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:11Z","lastTransitionTime":"2025-12-05T10:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.584898 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.584943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.584955 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.584972 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.584982 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:11Z","lastTransitionTime":"2025-12-05T10:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.687060 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.687099 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.687108 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.687122 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.687133 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:11Z","lastTransitionTime":"2025-12-05T10:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.788970 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.789001 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.789009 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.789024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.789031 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:11Z","lastTransitionTime":"2025-12-05T10:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.891019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.891050 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.891059 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.891072 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.891081 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:11Z","lastTransitionTime":"2025-12-05T10:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.992748 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.992782 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.992790 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.992803 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:11 crc kubenswrapper[4796]: I1205 10:29:11.992813 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:11Z","lastTransitionTime":"2025-12-05T10:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.096499 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.096539 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.096610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.096631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.096642 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:12Z","lastTransitionTime":"2025-12-05T10:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.198840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.198868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.198876 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.198887 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.198895 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:12Z","lastTransitionTime":"2025-12-05T10:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.300982 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.301017 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.301025 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.301037 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.301045 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:12Z","lastTransitionTime":"2025-12-05T10:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.402735 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.402764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.402774 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.402784 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.402792 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:12Z","lastTransitionTime":"2025-12-05T10:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.504914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.504936 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.504944 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.504953 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.504960 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:12Z","lastTransitionTime":"2025-12-05T10:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.606646 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.606672 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.606695 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.606705 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.606713 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:12Z","lastTransitionTime":"2025-12-05T10:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.708374 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.708402 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.708411 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.708422 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.708432 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:12Z","lastTransitionTime":"2025-12-05T10:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.810146 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.810169 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.810177 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.810188 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.810195 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:12Z","lastTransitionTime":"2025-12-05T10:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.911962 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.911988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.911997 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.912007 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.912014 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:12Z","lastTransitionTime":"2025-12-05T10:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:12 crc kubenswrapper[4796]: I1205 10:29:12.964744 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs\") pod \"network-metrics-daemon-sqdfm\" (UID: \"dcf780ba-edff-45ee-88e9-5b99e4d0e458\") " pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:29:12 crc kubenswrapper[4796]: E1205 10:29:12.964865 4796 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 10:29:12 crc kubenswrapper[4796]: E1205 10:29:12.964909 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs podName:dcf780ba-edff-45ee-88e9-5b99e4d0e458 nodeName:}" failed. No retries permitted until 2025-12-05 10:30:16.964890023 +0000 UTC m=+163.252995537 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs") pod "network-metrics-daemon-sqdfm" (UID: "dcf780ba-edff-45ee-88e9-5b99e4d0e458") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.013358 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.013378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.013385 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.013397 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.013404 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:13Z","lastTransitionTime":"2025-12-05T10:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.030806 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.030824 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.030847 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:29:13 crc kubenswrapper[4796]: E1205 10:29:13.030896 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.030817 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:29:13 crc kubenswrapper[4796]: E1205 10:29:13.030958 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:29:13 crc kubenswrapper[4796]: E1205 10:29:13.030996 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:29:13 crc kubenswrapper[4796]: E1205 10:29:13.031041 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.115560 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.115589 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.115598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.115607 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.115615 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:13Z","lastTransitionTime":"2025-12-05T10:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.217728 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.217756 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.217765 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.217778 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.217790 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:13Z","lastTransitionTime":"2025-12-05T10:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.319679 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.319722 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.319730 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.319742 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.319750 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:13Z","lastTransitionTime":"2025-12-05T10:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.420966 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.420995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.421005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.421014 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.421021 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:13Z","lastTransitionTime":"2025-12-05T10:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.523100 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.523125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.523134 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.523143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.523151 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:13Z","lastTransitionTime":"2025-12-05T10:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.624491 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.624518 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.624529 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.624540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.624548 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:13Z","lastTransitionTime":"2025-12-05T10:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.725800 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.725831 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.725841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.725851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.725859 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:13Z","lastTransitionTime":"2025-12-05T10:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.827875 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.827914 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.827922 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.827935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.827944 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:13Z","lastTransitionTime":"2025-12-05T10:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.929108 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.929135 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.929143 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.929153 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:13 crc kubenswrapper[4796]: I1205 10:29:13.929160 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:13Z","lastTransitionTime":"2025-12-05T10:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.030644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.030677 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.030704 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.030759 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.030770 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:14Z","lastTransitionTime":"2025-12-05T10:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.038153 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-sbxd4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"057c024f-9d49-40a3-81b5-e6fa91b46d53\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f836cc09c2cf90ea171dfa1e736ba5c44addd1cf6737cb7a861de3148a9f1c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdjhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-sbxd4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.045607 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91c62806950949f28d583d509ae72340ae26230b794ca0855c0859660a16f712\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.052514 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tz2w5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0158bdbd-94bb-4421-b698-bdcbe2e7f37b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f575feafc92282973dbc2122ccc008f78799121bcf748c299139d988c080f42f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ww4p7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tz2w5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.060212 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-cqj7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d541e60-9b92-4b9d-be51-5bd87e76deac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a560e3361c3254d165aaa6d35ccfa35a96aa2453c5a9a8168abd8e442280a65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:42Z\\\",\\\"message\\\":\\\"2025-12-05T10:27:56+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bfc8cff1-0c1d-472e-9857-6b3f321e587a\\\\n2025-12-05T10:27:56+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bfc8cff1-0c1d-472e-9857-6b3f321e587a to /host/opt/cni/bin/\\\\n2025-12-05T10:27:57Z [verbose] multus-daemon started\\\\n2025-12-05T10:27:57Z [verbose] Readiness Indicator file check\\\\n2025-12-05T10:28:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmx6j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-cqj7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.068764 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e20d03d5-f7b8-4044-8ee1-151ddfbf6aa7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b40b8ab914ef492da0e5f190bc1e0227047ca7d3e73d90c363e2316028830518\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb49ef37337a9807d74600e6a26fd5936ef7ba51d858579853e9a7788a6ae035\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://081060ce37e76f97f457408a4a7973e7d96ca4322483413e0eceda82c4340e18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e630fb0f0abf144d87d316e4af888590ac4340ce07cdb1e52a30cb0285b31243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f3f1
dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8f3f1dc241f01044fd2f43adc98ac67f109738b163340d7aa71b50c04f9bb582\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://133eabea6d33c4bf986fce8568e61e5c2c81a19f170bd74ce78acb8c083561fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df039fb769da94bcde9a8ea46441fc8ac650969fa414c955d039f6c040699d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:28:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chrfj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ct8sh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.076247 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.082994 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96992bd4-728c-4608-bc6a-df74b8823664\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de894806542062173ba9c06c518d22b16683b49ecd18fa80efe7d4132c719480\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8128986b7933bc3672b05ee1df32a49cab978
5765d3ece1169d5bc39977af6a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:28:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5hsvg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2khms\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.094424 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"27161b6d-9207-4300-9e06-922500976e3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eeb61283fc6483c27967cc9aec0f447afed95fead48934d84bf6cbe6e9f8314a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c7f0eab4a8c259e73d09b3f541a1a062e622f59f6dee83a9e9f117196c96cb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://77da1cc981330ec62bb4932442cbba7ddb77edc73f07a2860bd5ee14c662828a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f45e392233dbda59f24b9421a3a4c07ccdeac23a5623089c06f2f09ac9c1fc3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc073289a2134ac5162c9843e7766401b214994005e8ad213969efec0125d29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://071b71f18c3f2a912b633c637bb754e0cc7f3a58390c9689702d55644240f8b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2793b0b4dc6b83602b68aa7f718f34b2fa8e921b483a48615c85a5bd434a1b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://263e22e49a22fd3eafc309b3efdceaeaf992a522c25ecb2ca6eb0981379910bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.103113 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03eeafb-c11f-4937-ac87-36f9655918f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d74879edbd9f2c142bd2c781e12dba34d9c057faf790e8a01b13132b77774f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd8b7dd6edf0eceddfcc8c3eb9b44e5d8ff933778192b3ba31931494edd0aa0a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:
35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://310450a98a7db8cfb21ce5cb3343a41818c07fcf7b8629ff88be531d1d413aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.112817 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c76461bd-d72c-4115-bce7-fb2d280cf460\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464e352ac46b2137117bccb9e78a428dc04fdeaef46f4b91f6ce209776f0a31d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8101271bf7294e7870bda826831cfe83ac569186043e35df696bfa5fd9ddcef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ced98d292165ffd0447e73f7e0602c556744ead05af25fcb331c676fbec5998d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://6a47fe81995daaec026a3c545f1c1a7de433b518c6e17cd5e47f8b6c44a55ded\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.120325 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.126880 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcf780ba-edff-45ee-88e9-5b99e4d0e458\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gxz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:28:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sqdfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:14 crc 
kubenswrapper[4796]: I1205 10:29:14.131915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.131941 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.131950 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.131961 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.131969 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:14Z","lastTransitionTime":"2025-12-05T10:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.133647 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7bf706c1-9a18-4a04-8c0f-2048e862c35e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a445a8402d0df1de83bf6eae592b3fec8ffd202b76dc5e093b3d5a06b9e38f51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e554379579217035feeceb2e5573308297b4dfa8d3d1436369be3b12c65c39ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e554379579217035feeceb2e5573308297b4dfa8d3d1436369be3b12c65c39ca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.142980 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a02bec29-3b71-4b9d-ac9b-1226f8747525\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:28:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-05T10:27:50Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1205 10:27:50.665794 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1205 10:27:50.665920 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1205 10:27:50.668666 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2660507503/tls.crt::/tmp/serving-cert-2660507503/tls.key\\\\\\\"\\\\nI1205 10:27:50.889727 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1205 10:27:50.892184 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1205 10:27:50.892208 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1205 10:27:50.892235 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1205 10:27:50.892245 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1205 10:27:50.897135 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1205 10:27:50.897155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897159 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1205 10:27:50.897162 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1205 10:27:50.897165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1205 10:27:50.897168 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1205 10:27:50.897170 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1205 10:27:50.897169 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1205 10:27:50.898617 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a24c462575c7cc390b21322894de6af3c
97585ad2efe9615b4c4d694dd494cf0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:34Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.150995 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a13c3434a66a4bf57fc911ea98872fbce3da3a363a195163ebef64a378c519f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.158760 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.166451 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bc85f35efe64fe937b54680c518aaf63d2f043ab3f853357e1624ba1ba97e86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a422f1d385ca571cb740e65a1a5a2cfc09c91b14b20f9318331a0682ff253f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.173399 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7796bae1-68a7-44b4-98cc-0dd83da754bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb014f2eff31e4759608dbaf2026eed34d01f659af0ffe0f19f238155d3c5d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3
a307f0b67e5be9cc82e09add\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j4drh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9pllw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.184893 4796 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d158ce1c-6415-4e69-a1fe-862330b25ff3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-05T10:28:40Z\\\",\\\"message\\\":\\\"rg/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.174\\\\\\\", Port:9393, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1205 10:28:40.644678 6793 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-05T10:28:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-xvb5x_openshift-ovn-kubernetes(d158ce1c-6415-4e69-a1fe-862330b25ff3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-05T10:27:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4d85cd22f17aa7e3a
270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-05T10:27:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-05T10:27:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22cc2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-05T10:27:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-xvb5x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:14Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.233096 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.233123 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.233131 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.233141 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.233148 4796 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:14Z","lastTransitionTime":"2025-12-05T10:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.335080 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.335108 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.335116 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.335125 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.335133 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:14Z","lastTransitionTime":"2025-12-05T10:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.436233 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.436266 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.436275 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.436287 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.436297 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:14Z","lastTransitionTime":"2025-12-05T10:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.537420 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.537449 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.537457 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.537468 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.537486 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:14Z","lastTransitionTime":"2025-12-05T10:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.639242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.639273 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.639281 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.639290 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.639299 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:14Z","lastTransitionTime":"2025-12-05T10:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.740907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.740934 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.740942 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.740965 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.740974 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:14Z","lastTransitionTime":"2025-12-05T10:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.842178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.842215 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.842225 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.842242 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.842253 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:14Z","lastTransitionTime":"2025-12-05T10:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.944431 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.944463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.944471 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.944492 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:14 crc kubenswrapper[4796]: I1205 10:29:14.944500 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:14Z","lastTransitionTime":"2025-12-05T10:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.030331 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.030362 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.030386 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:29:15 crc kubenswrapper[4796]: E1205 10:29:15.030420 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.030453 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:29:15 crc kubenswrapper[4796]: E1205 10:29:15.030574 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:29:15 crc kubenswrapper[4796]: E1205 10:29:15.030594 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:29:15 crc kubenswrapper[4796]: E1205 10:29:15.030733 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.046856 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.046891 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.046900 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.046915 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.046926 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:15Z","lastTransitionTime":"2025-12-05T10:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.148603 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.148633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.148643 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.148652 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.148661 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:15Z","lastTransitionTime":"2025-12-05T10:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.250599 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.250628 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.250637 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.250648 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.250656 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:15Z","lastTransitionTime":"2025-12-05T10:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.353047 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.353162 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.353237 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.353308 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.353371 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:15Z","lastTransitionTime":"2025-12-05T10:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.455044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.455152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.455222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.455280 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.455344 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:15Z","lastTransitionTime":"2025-12-05T10:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.556962 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.556990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.557000 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.557011 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.557019 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:15Z","lastTransitionTime":"2025-12-05T10:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.659168 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.659206 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.659214 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.659227 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.659235 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:15Z","lastTransitionTime":"2025-12-05T10:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.760965 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.760996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.761005 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.761017 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.761026 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:15Z","lastTransitionTime":"2025-12-05T10:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.849463 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.849589 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.849653 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.849741 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.849808 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:15Z","lastTransitionTime":"2025-12-05T10:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:15 crc kubenswrapper[4796]: E1205 10:29:15.858427 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:15Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.860791 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.860840 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.860850 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.860861 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.860868 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:15Z","lastTransitionTime":"2025-12-05T10:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:15 crc kubenswrapper[4796]: E1205 10:29:15.868424 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:15Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.870834 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.870869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.870877 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.870886 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.870895 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:15Z","lastTransitionTime":"2025-12-05T10:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:15 crc kubenswrapper[4796]: E1205 10:29:15.878033 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:15Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.880132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.880155 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.880163 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.880172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.880179 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:15Z","lastTransitionTime":"2025-12-05T10:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:15 crc kubenswrapper[4796]: E1205 10:29:15.887268 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:15Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.889500 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.889598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.889657 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.889764 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.889824 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:15Z","lastTransitionTime":"2025-12-05T10:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:15 crc kubenswrapper[4796]: E1205 10:29:15.897960 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-05T10:29:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"461f0aa3-0c3e-46e8-8138-7f8b2360aec8\\\",\\\"systemUUID\\\":\\\"bd1699b9-79b9-439b-a76e-17d5109bc482\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-05T10:29:15Z is after 2025-08-24T17:21:41Z" Dec 05 10:29:15 crc kubenswrapper[4796]: E1205 10:29:15.898188 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.899122 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.899151 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.899159 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.899170 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:15 crc kubenswrapper[4796]: I1205 10:29:15.899193 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:15Z","lastTransitionTime":"2025-12-05T10:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.001065 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.001101 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.001110 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.001123 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.001131 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:16Z","lastTransitionTime":"2025-12-05T10:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.102935 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.102963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.102972 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.102982 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.103008 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:16Z","lastTransitionTime":"2025-12-05T10:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.205147 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.205275 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.205360 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.205435 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.205509 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:16Z","lastTransitionTime":"2025-12-05T10:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.307711 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.307739 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.307746 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.307755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.307764 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:16Z","lastTransitionTime":"2025-12-05T10:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.408902 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.408937 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.408945 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.408957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.408966 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:16Z","lastTransitionTime":"2025-12-05T10:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.510758 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.510791 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.510804 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.510818 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.510826 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:16Z","lastTransitionTime":"2025-12-05T10:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.611910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.611952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.611962 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.611974 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.611983 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:16Z","lastTransitionTime":"2025-12-05T10:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.713720 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.713767 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.713776 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.713787 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.713795 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:16Z","lastTransitionTime":"2025-12-05T10:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.815117 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.815174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.815186 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.815197 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.815205 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:16Z","lastTransitionTime":"2025-12-05T10:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.916766 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.916802 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.916810 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.916823 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:16 crc kubenswrapper[4796]: I1205 10:29:16.916831 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:16Z","lastTransitionTime":"2025-12-05T10:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.018084 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.018120 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.018139 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.018152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.018163 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:17Z","lastTransitionTime":"2025-12-05T10:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.030085 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.030106 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.030109 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.030139 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:29:17 crc kubenswrapper[4796]: E1205 10:29:17.030223 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:29:17 crc kubenswrapper[4796]: E1205 10:29:17.030277 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:29:17 crc kubenswrapper[4796]: E1205 10:29:17.030334 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:29:17 crc kubenswrapper[4796]: E1205 10:29:17.030400 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.119967 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.119995 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.120004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.120014 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.120022 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:17Z","lastTransitionTime":"2025-12-05T10:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.221663 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.221732 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.221745 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.221762 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.221772 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:17Z","lastTransitionTime":"2025-12-05T10:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.324305 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.324334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.324342 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.324352 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.324361 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:17Z","lastTransitionTime":"2025-12-05T10:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.425618 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.425643 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.425651 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.425660 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.425668 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:17Z","lastTransitionTime":"2025-12-05T10:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.527340 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.527364 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.527372 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.527382 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.527388 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:17Z","lastTransitionTime":"2025-12-05T10:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.629403 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.629433 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.629440 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.629451 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.629458 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:17Z","lastTransitionTime":"2025-12-05T10:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.730851 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.730884 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.730895 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.730906 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.730915 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:17Z","lastTransitionTime":"2025-12-05T10:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.832576 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.832605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.832612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.832622 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.832628 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:17Z","lastTransitionTime":"2025-12-05T10:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.934129 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.934172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.934185 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.934202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:17 crc kubenswrapper[4796]: I1205 10:29:17.934214 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:17Z","lastTransitionTime":"2025-12-05T10:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.036004 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.036030 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.036038 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.036049 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.036056 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:18Z","lastTransitionTime":"2025-12-05T10:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.137717 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.137745 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.137754 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.137766 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.137774 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:18Z","lastTransitionTime":"2025-12-05T10:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.239810 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.239844 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.239852 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.239863 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.239871 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:18Z","lastTransitionTime":"2025-12-05T10:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.342336 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.342374 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.342383 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.342399 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.342407 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:18Z","lastTransitionTime":"2025-12-05T10:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.444112 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.444144 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.444152 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.444165 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.444172 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:18Z","lastTransitionTime":"2025-12-05T10:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.545996 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.546024 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.546033 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.546044 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.546051 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:18Z","lastTransitionTime":"2025-12-05T10:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.647751 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.647791 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.647799 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.647813 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.647821 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:18Z","lastTransitionTime":"2025-12-05T10:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.749612 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.749647 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.749654 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.749667 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.749676 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:18Z","lastTransitionTime":"2025-12-05T10:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.851980 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.852016 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.852026 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.852039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.852054 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:18Z","lastTransitionTime":"2025-12-05T10:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.953693 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.953732 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.953741 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.953755 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:18 crc kubenswrapper[4796]: I1205 10:29:18.953763 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:18Z","lastTransitionTime":"2025-12-05T10:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.030773 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.030817 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.030839 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.030871 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:29:19 crc kubenswrapper[4796]: E1205 10:29:19.030992 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:29:19 crc kubenswrapper[4796]: E1205 10:29:19.031171 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:29:19 crc kubenswrapper[4796]: E1205 10:29:19.031254 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:29:19 crc kubenswrapper[4796]: E1205 10:29:19.031322 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.055307 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.055329 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.055336 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.055344 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.055351 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:19Z","lastTransitionTime":"2025-12-05T10:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.157224 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.157264 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.157274 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.157289 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.157301 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:19Z","lastTransitionTime":"2025-12-05T10:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.259370 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.259424 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.259448 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.259461 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.259478 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:19Z","lastTransitionTime":"2025-12-05T10:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.361849 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.361898 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.361907 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.361919 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.361927 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:19Z","lastTransitionTime":"2025-12-05T10:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.463565 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.463596 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.463605 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.463630 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.463639 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:19Z","lastTransitionTime":"2025-12-05T10:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.565625 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.565671 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.565679 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.565711 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.565722 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:19Z","lastTransitionTime":"2025-12-05T10:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.668311 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.668347 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.668355 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.668366 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.668373 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:19Z","lastTransitionTime":"2025-12-05T10:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.770429 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.770456 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.770465 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.770486 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.770494 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:19Z","lastTransitionTime":"2025-12-05T10:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.871872 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.871902 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.871911 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.871921 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.871927 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:19Z","lastTransitionTime":"2025-12-05T10:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.973502 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.973551 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.973561 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.973573 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:19 crc kubenswrapper[4796]: I1205 10:29:19.973581 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:19Z","lastTransitionTime":"2025-12-05T10:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.075832 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.075860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.075868 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.075878 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.075886 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:20Z","lastTransitionTime":"2025-12-05T10:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.178525 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.178554 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.178564 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.178576 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.178585 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:20Z","lastTransitionTime":"2025-12-05T10:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.280380 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.280415 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.280425 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.280437 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.280445 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:20Z","lastTransitionTime":"2025-12-05T10:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.382414 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.382450 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.382458 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.382479 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.382488 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:20Z","lastTransitionTime":"2025-12-05T10:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.484398 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.484427 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.484435 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.484446 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.484454 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:20Z","lastTransitionTime":"2025-12-05T10:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.586508 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.586538 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.586548 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.586560 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.586568 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:20Z","lastTransitionTime":"2025-12-05T10:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.688000 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.688040 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.688048 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.688061 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.688069 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:20Z","lastTransitionTime":"2025-12-05T10:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.789422 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.789455 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.789462 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.789484 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.789493 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:20Z","lastTransitionTime":"2025-12-05T10:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.891502 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.891539 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.891548 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.891561 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.891571 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:20Z","lastTransitionTime":"2025-12-05T10:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.993439 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.993465 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.993489 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.993500 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:20 crc kubenswrapper[4796]: I1205 10:29:20.993508 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:20Z","lastTransitionTime":"2025-12-05T10:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.030999 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.031013 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.031047 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:29:21 crc kubenswrapper[4796]: E1205 10:29:21.031082 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:29:21 crc kubenswrapper[4796]: E1205 10:29:21.031167 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.031174 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:29:21 crc kubenswrapper[4796]: E1205 10:29:21.031210 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:29:21 crc kubenswrapper[4796]: E1205 10:29:21.031253 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.095552 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.095573 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.095583 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.095593 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.095601 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:21Z","lastTransitionTime":"2025-12-05T10:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.197280 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.197302 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.197311 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.197320 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.197328 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:21Z","lastTransitionTime":"2025-12-05T10:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.298589 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.298624 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.298633 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.298644 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.298652 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:21Z","lastTransitionTime":"2025-12-05T10:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.400310 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.400345 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.400354 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.400367 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.400376 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:21Z","lastTransitionTime":"2025-12-05T10:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.502270 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.502319 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.502333 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.502346 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.502354 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:21Z","lastTransitionTime":"2025-12-05T10:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.604334 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.604375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.604385 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.604398 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.604408 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:21Z","lastTransitionTime":"2025-12-05T10:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.706428 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.706479 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.706487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.706501 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.706513 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:21Z","lastTransitionTime":"2025-12-05T10:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.807875 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.807909 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.807917 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.807928 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.807937 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:21Z","lastTransitionTime":"2025-12-05T10:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.909948 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.909985 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.909993 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.910006 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:21 crc kubenswrapper[4796]: I1205 10:29:21.910017 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:21Z","lastTransitionTime":"2025-12-05T10:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.013164 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.013196 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.013204 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.013217 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.013224 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:22Z","lastTransitionTime":"2025-12-05T10:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.115178 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.115213 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.115222 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.115233 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.115242 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:22Z","lastTransitionTime":"2025-12-05T10:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.217084 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.217131 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.217140 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.217156 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.217166 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:22Z","lastTransitionTime":"2025-12-05T10:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.319009 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.319039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.319047 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.319073 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.319082 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:22Z","lastTransitionTime":"2025-12-05T10:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.421176 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.421215 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.421223 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.421236 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.421269 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:22Z","lastTransitionTime":"2025-12-05T10:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.523132 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.523174 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.523183 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.523195 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.523203 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:22Z","lastTransitionTime":"2025-12-05T10:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.624963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.624998 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.625006 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.625019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.625026 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:22Z","lastTransitionTime":"2025-12-05T10:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.726963 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.726997 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.727006 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.727023 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.727033 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:22Z","lastTransitionTime":"2025-12-05T10:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.828532 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.828570 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.828578 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.828593 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.828602 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:22Z","lastTransitionTime":"2025-12-05T10:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.930281 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.930310 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.930318 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.930331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:22 crc kubenswrapper[4796]: I1205 10:29:22.930340 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:22Z","lastTransitionTime":"2025-12-05T10:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.030428 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.030452 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.030452 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.030507 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:29:23 crc kubenswrapper[4796]: E1205 10:29:23.030550 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:29:23 crc kubenswrapper[4796]: E1205 10:29:23.030646 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:29:23 crc kubenswrapper[4796]: E1205 10:29:23.030734 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:29:23 crc kubenswrapper[4796]: E1205 10:29:23.030777 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.031655 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.031679 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.031702 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.031713 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.031721 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:23Z","lastTransitionTime":"2025-12-05T10:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.133343 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.133398 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.133408 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.133426 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.133438 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:23Z","lastTransitionTime":"2025-12-05T10:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.235205 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.235236 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.235243 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.235253 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.235262 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:23Z","lastTransitionTime":"2025-12-05T10:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.337089 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.337126 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.337138 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.337150 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.337158 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:23Z","lastTransitionTime":"2025-12-05T10:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.439165 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.439193 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.439202 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.439214 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.439223 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:23Z","lastTransitionTime":"2025-12-05T10:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.541186 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.541237 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.541246 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.541259 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.541268 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:23Z","lastTransitionTime":"2025-12-05T10:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.643021 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.643062 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.643070 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.643082 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.643091 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:23Z","lastTransitionTime":"2025-12-05T10:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.744920 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.744957 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.744967 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.744981 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.744990 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:23Z","lastTransitionTime":"2025-12-05T10:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.846807 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.846835 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.846843 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.846854 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.846861 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:23Z","lastTransitionTime":"2025-12-05T10:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.948833 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.948864 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.948873 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.948885 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:23 crc kubenswrapper[4796]: I1205 10:29:23.948893 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:23Z","lastTransitionTime":"2025-12-05T10:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.030845 4796 scope.go:117] "RemoveContainer" containerID="428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.051378 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.051530 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.051540 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.051553 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.051563 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:24Z","lastTransitionTime":"2025-12-05T10:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.056261 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=30.056248995 podStartE2EDuration="30.056248995s" podCreationTimestamp="2025-12-05 10:28:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:24.044904855 +0000 UTC m=+110.333010369" watchObservedRunningTime="2025-12-05 10:29:24.056248995 +0000 UTC m=+110.344354508" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.066438 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=93.066427944 podStartE2EDuration="1m33.066427944s" podCreationTimestamp="2025-12-05 10:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:24.056483655 +0000 UTC m=+110.344589169" watchObservedRunningTime="2025-12-05 10:29:24.066427944 +0000 UTC m=+110.354533457" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.100154 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podStartSLOduration=89.100138597 podStartE2EDuration="1m29.100138597s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:24.099558337 +0000 UTC m=+110.387663849" watchObservedRunningTime="2025-12-05 10:29:24.100138597 +0000 UTC m=+110.388244110" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.141628 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tz2w5" 
podStartSLOduration=89.141613441 podStartE2EDuration="1m29.141613441s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:24.141183714 +0000 UTC m=+110.429289227" watchObservedRunningTime="2025-12-05 10:29:24.141613441 +0000 UTC m=+110.429718954" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.152990 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.153022 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.153030 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.153042 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.153051 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:24Z","lastTransitionTime":"2025-12-05T10:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.170645 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-cqj7h" podStartSLOduration=89.170630033 podStartE2EDuration="1m29.170630033s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:24.153445237 +0000 UTC m=+110.441550750" watchObservedRunningTime="2025-12-05 10:29:24.170630033 +0000 UTC m=+110.458735546" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.170804 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ct8sh" podStartSLOduration=89.170800323 podStartE2EDuration="1m29.170800323s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:24.170447289 +0000 UTC m=+110.458552802" watchObservedRunningTime="2025-12-05 10:29:24.170800323 +0000 UTC m=+110.458905836" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.186280 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-sbxd4" podStartSLOduration=89.18626703 podStartE2EDuration="1m29.18626703s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:24.179306107 +0000 UTC m=+110.467411621" watchObservedRunningTime="2025-12-05 10:29:24.18626703 +0000 UTC m=+110.474372543" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.203022 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2khms" podStartSLOduration=89.203007391 
podStartE2EDuration="1m29.203007391s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:24.186426138 +0000 UTC m=+110.474531661" watchObservedRunningTime="2025-12-05 10:29:24.203007391 +0000 UTC m=+110.491112903" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.203284 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=93.203280093 podStartE2EDuration="1m33.203280093s" podCreationTimestamp="2025-12-05 10:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:24.203145459 +0000 UTC m=+110.491250982" watchObservedRunningTime="2025-12-05 10:29:24.203280093 +0000 UTC m=+110.491385606" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.213970 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=93.213956657 podStartE2EDuration="1m33.213956657s" podCreationTimestamp="2025-12-05 10:27:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:24.213445697 +0000 UTC m=+110.501551210" watchObservedRunningTime="2025-12-05 10:29:24.213956657 +0000 UTC m=+110.502062170" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.222560 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=57.22255174 podStartE2EDuration="57.22255174s" podCreationTimestamp="2025-12-05 10:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:24.222078961 +0000 UTC 
m=+110.510184475" watchObservedRunningTime="2025-12-05 10:29:24.22255174 +0000 UTC m=+110.510657253" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.254516 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.254566 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.254578 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.254595 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.254605 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:24Z","lastTransitionTime":"2025-12-05T10:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.356327 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.356362 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.356375 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.356387 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.356396 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:24Z","lastTransitionTime":"2025-12-05T10:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.404251 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xvb5x_d158ce1c-6415-4e69-a1fe-862330b25ff3/ovnkube-controller/3.log" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.406480 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerStarted","Data":"947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a"} Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.406844 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.428177 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" podStartSLOduration=89.428165704 podStartE2EDuration="1m29.428165704s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:24.425723244 +0000 UTC m=+110.713828757" watchObservedRunningTime="2025-12-05 10:29:24.428165704 +0000 UTC m=+110.716271217" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.458583 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.458610 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.458618 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.458629 4796 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.458638 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:24Z","lastTransitionTime":"2025-12-05T10:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.560310 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.560349 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.560359 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.560373 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.560383 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:24Z","lastTransitionTime":"2025-12-05T10:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.648643 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sqdfm"] Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.648769 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:29:24 crc kubenswrapper[4796]: E1205 10:29:24.648850 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.662308 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.662355 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.662365 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.662380 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.662398 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:24Z","lastTransitionTime":"2025-12-05T10:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.763956 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.764166 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.764175 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.764191 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.764199 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:24Z","lastTransitionTime":"2025-12-05T10:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.866137 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.866172 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.866180 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.866193 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.866202 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:24Z","lastTransitionTime":"2025-12-05T10:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.968019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.968058 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.968066 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.968079 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:24 crc kubenswrapper[4796]: I1205 10:29:24.968087 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:24Z","lastTransitionTime":"2025-12-05T10:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.030710 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.030756 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.030711 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:29:25 crc kubenswrapper[4796]: E1205 10:29:25.030832 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:29:25 crc kubenswrapper[4796]: E1205 10:29:25.030870 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:29:25 crc kubenswrapper[4796]: E1205 10:29:25.030941 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.069452 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.069487 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.069496 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.069507 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.069515 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:25Z","lastTransitionTime":"2025-12-05T10:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.171262 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.171302 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.171313 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.171327 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.171337 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:25Z","lastTransitionTime":"2025-12-05T10:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.272988 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.273019 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.273027 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.273039 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.273046 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:25Z","lastTransitionTime":"2025-12-05T10:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.374994 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.375036 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.375046 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.375062 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.375071 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:25Z","lastTransitionTime":"2025-12-05T10:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.476897 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.476943 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.476952 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.476964 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.476972 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:25Z","lastTransitionTime":"2025-12-05T10:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.578869 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.578902 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.578910 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.578922 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.578931 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:25Z","lastTransitionTime":"2025-12-05T10:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.680801 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.680841 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.680848 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.680860 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.680870 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:25Z","lastTransitionTime":"2025-12-05T10:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.782586 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.782615 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.782625 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.782637 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.782645 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:25Z","lastTransitionTime":"2025-12-05T10:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.884553 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.884590 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.884598 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.884631 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.884640 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:25Z","lastTransitionTime":"2025-12-05T10:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.986533 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.986566 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.986575 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.986585 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:25 crc kubenswrapper[4796]: I1205 10:29:25.986593 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:25Z","lastTransitionTime":"2025-12-05T10:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.030979 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:29:26 crc kubenswrapper[4796]: E1205 10:29:26.031093 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sqdfm" podUID="dcf780ba-edff-45ee-88e9-5b99e4d0e458" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.088197 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.088237 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.088245 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.088259 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.088267 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:26Z","lastTransitionTime":"2025-12-05T10:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.190324 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.190369 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.190379 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.190394 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.190403 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:26Z","lastTransitionTime":"2025-12-05T10:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.191264 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.191301 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.191317 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.191331 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.191341 4796 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-05T10:29:26Z","lastTransitionTime":"2025-12-05T10:29:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.220521 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f"] Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.220847 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.222181 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.222579 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.222590 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.222618 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.375420 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8ffd064-275e-4b64-9e17-a5c8c8a8220d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rmk7f\" (UID: \"b8ffd064-275e-4b64-9e17-a5c8c8a8220d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.375453 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8ffd064-275e-4b64-9e17-a5c8c8a8220d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rmk7f\" (UID: \"b8ffd064-275e-4b64-9e17-a5c8c8a8220d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.375484 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/b8ffd064-275e-4b64-9e17-a5c8c8a8220d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rmk7f\" (UID: \"b8ffd064-275e-4b64-9e17-a5c8c8a8220d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.375507 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8ffd064-275e-4b64-9e17-a5c8c8a8220d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rmk7f\" (UID: \"b8ffd064-275e-4b64-9e17-a5c8c8a8220d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.375646 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b8ffd064-275e-4b64-9e17-a5c8c8a8220d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rmk7f\" (UID: \"b8ffd064-275e-4b64-9e17-a5c8c8a8220d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.477080 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b8ffd064-275e-4b64-9e17-a5c8c8a8220d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rmk7f\" (UID: \"b8ffd064-275e-4b64-9e17-a5c8c8a8220d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.477130 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8ffd064-275e-4b64-9e17-a5c8c8a8220d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rmk7f\" (UID: \"b8ffd064-275e-4b64-9e17-a5c8c8a8220d\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.477156 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b8ffd064-275e-4b64-9e17-a5c8c8a8220d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rmk7f\" (UID: \"b8ffd064-275e-4b64-9e17-a5c8c8a8220d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.477171 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8ffd064-275e-4b64-9e17-a5c8c8a8220d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rmk7f\" (UID: \"b8ffd064-275e-4b64-9e17-a5c8c8a8220d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.477196 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8ffd064-275e-4b64-9e17-a5c8c8a8220d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rmk7f\" (UID: \"b8ffd064-275e-4b64-9e17-a5c8c8a8220d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.477210 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b8ffd064-275e-4b64-9e17-a5c8c8a8220d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rmk7f\" (UID: \"b8ffd064-275e-4b64-9e17-a5c8c8a8220d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.477262 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/b8ffd064-275e-4b64-9e17-a5c8c8a8220d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rmk7f\" (UID: \"b8ffd064-275e-4b64-9e17-a5c8c8a8220d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.477942 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8ffd064-275e-4b64-9e17-a5c8c8a8220d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rmk7f\" (UID: \"b8ffd064-275e-4b64-9e17-a5c8c8a8220d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.481614 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8ffd064-275e-4b64-9e17-a5c8c8a8220d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rmk7f\" (UID: \"b8ffd064-275e-4b64-9e17-a5c8c8a8220d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.490111 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b8ffd064-275e-4b64-9e17-a5c8c8a8220d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rmk7f\" (UID: \"b8ffd064-275e-4b64-9e17-a5c8c8a8220d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" Dec 05 10:29:26 crc kubenswrapper[4796]: I1205 10:29:26.530976 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" Dec 05 10:29:26 crc kubenswrapper[4796]: W1205 10:29:26.540940 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8ffd064_275e_4b64_9e17_a5c8c8a8220d.slice/crio-da737f3e97ab81e03c11d5384a80a44927566531e92bba725e5205865dfa45e4 WatchSource:0}: Error finding container da737f3e97ab81e03c11d5384a80a44927566531e92bba725e5205865dfa45e4: Status 404 returned error can't find the container with id da737f3e97ab81e03c11d5384a80a44927566531e92bba725e5205865dfa45e4 Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.030623 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.030634 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.030871 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:29:27 crc kubenswrapper[4796]: E1205 10:29:27.030933 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 05 10:29:27 crc kubenswrapper[4796]: E1205 10:29:27.031004 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 05 10:29:27 crc kubenswrapper[4796]: E1205 10:29:27.031061 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.107249 4796 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.107347 4796 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.129815 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v4phj"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.130219 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.132913 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.133084 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.133714 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.133858 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.134084 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.134725 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.134981 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.135643 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n42q4"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.136006 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n42q4" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.136120 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-j7krr"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.136358 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.136666 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-chjp6"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.136890 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.137760 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qvpdg"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.138744 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.138820 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.139086 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.139112 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.139140 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 05 
10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.139168 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.139185 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.139239 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.139424 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.139497 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.139602 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.139637 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.149785 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.152014 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.152948 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.153316 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.153409 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.153440 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.157501 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.157990 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-42nbd"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.158102 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.158336 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.158427 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.158456 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.158588 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6v7lm"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.158706 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 05 10:29:27 crc 
kubenswrapper[4796]: I1205 10:29:27.158734 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.158827 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8hb86"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.158841 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.158911 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.158956 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.158973 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.159060 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42nbd" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.159124 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2tpfm"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.159224 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8hb86" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.159245 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6v7lm" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.159880 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-t4hjr"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.159945 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.160118 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.160345 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.160490 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.160354 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2tpfm" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.160901 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pzjgw"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.160942 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.161050 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.161344 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.161365 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.161346 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nxq92"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.161369 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.161523 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.161556 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.161625 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.161864 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nxq92" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.163034 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.164010 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.167530 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-zsscn"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.167927 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxz98"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.168156 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-54wv6"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.168746 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-54wv6" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.168993 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-zsscn" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.169244 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.169877 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxz98" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.169988 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.170149 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.170335 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.173426 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lwbh6"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.173909 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwbh6" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.174054 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.174332 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.176174 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.176563 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5k24"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.176795 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-k9j8l"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.177064 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.177222 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.177347 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5k24" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.177509 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.177915 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.184250 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.184731 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.184905 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.185311 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.185453 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.185599 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.185757 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.185896 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.186279 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.186335 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.186496 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.186519 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.186641 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.186808 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.186992 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.187231 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.187498 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.187915 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-session" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.187992 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.188382 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.188585 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.188675 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.188905 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.188955 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.189113 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.189374 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.188957 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.190658 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-jfp56"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.199672 4796 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.199797 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.200026 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.200861 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.200929 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.200983 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z2zft"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.201009 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.201196 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.201290 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.199792 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.201414 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-z2zft" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.201433 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-jfp56" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.201730 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.201893 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wzr5n"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.201980 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.202168 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.202218 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.202274 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.202315 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.202349 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wzr5n" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.202517 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.202552 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.202697 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l6k4p"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.202980 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.203055 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.203285 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l6k4p" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.204048 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.204814 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.206800 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.207678 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.207694 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.211987 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.212142 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzkjl"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.212716 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzkjl" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.213379 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-j7krr"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.214043 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dktck"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.214443 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dktck" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.216725 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.218360 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v4phj"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.218512 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.218597 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-w7mdv"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.218974 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.219112 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-w7mdv" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.219279 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r45ng"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.219631 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r45ng" Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.222238 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blx8z"] Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.225986 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blx8z"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.226133 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.226756 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.229246 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gmlb4"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.231629 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.233008 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6ftq2"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.233169 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.235553 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.235589 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-chjp6"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.235656 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6ftq2"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.236291 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-t4hjr"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.237755 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.238912 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.239543 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.239891 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.240668 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-42nbd"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.241789 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qvpdg"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.242549 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lxxz8"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.242918 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxxz8"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.243264 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6v7lm"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.244086 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.245299 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxz98"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.246073 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-9fpsf"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.246695 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-9fpsf"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.252222 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.252480 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n42q4"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.254503 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzkjl"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.256337 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.256815 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nxq92"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.257545 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z2zft"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.260524 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lwbh6"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.262378 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pzjgw"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.263307 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-k9j8l"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.264752 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gmlb4"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.265663 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8hb86"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.266897 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.267782 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5k24"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.268907 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2tpfm"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.269831 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.271511 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.271702 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wzr5n"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.272514 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l6k4p"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.273333 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.274134 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jfp56"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.274945 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r45ng"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.275793 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dktck"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.276571 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-w7mdv"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.277607 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.278479 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blx8z"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.280253 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lxxz8"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.281108 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.282122 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9fpsf"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.282942 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6ftq2"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.284208 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n87bt"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.285095 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-n87bt"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.285478 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-hktfd"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.285983 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hktfd"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.286438 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hktfd"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.287377 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n87bt"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.291976 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.311517 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.332003 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.346401 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2mpns"]
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.346985 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2mpns"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.351528 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.371223 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.391535 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.411477 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.413188 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" event={"ID":"b8ffd064-275e-4b64-9e17-a5c8c8a8220d","Type":"ContainerStarted","Data":"8c9f341bf4286decb23a3e259a8863f93279861b93e44444d6582e7b19470a52"}
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.413226 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" event={"ID":"b8ffd064-275e-4b64-9e17-a5c8c8a8220d","Type":"ContainerStarted","Data":"da737f3e97ab81e03c11d5384a80a44927566531e92bba725e5205865dfa45e4"}
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.431702 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.451579 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.472253 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.491494 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.511888 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.532239 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.552457 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.576924 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.592142 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.611398 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.631751 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.651790 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.672216 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.692119 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.711705 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.731643 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.756844 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.772549 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.791778 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.812430 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.831549 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.851857 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.891505 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.912073 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.931442 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.951508 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.972193 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 05 10:29:27 crc kubenswrapper[4796]: I1205 10:29:27.991973 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.012492 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.031216 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.031792 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.051936 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.071801 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.092798 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.131761 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.152406 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.171891 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.192380 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.210724 4796 request.go:700] Waited for 1.005712402s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&limit=500&resourceVersion=0
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.211710 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.231851 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.251878 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.271284 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.291594 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.312734 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.332026 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.352138 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.371846 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.392422 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.411406 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.431715 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.451855 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.472651 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.492055 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.511720 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.532255 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.551384 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.572244 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.591794 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.617118 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.631872 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.651814 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.672413 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.692379 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.711755 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.732029 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.752091 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.771333 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.791857 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.812261 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.831481 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.852518 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.871778 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.892281 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.911482 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.931810 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.951972 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.971897 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 05 10:29:28 crc kubenswrapper[4796]: I1205 10:29:28.991802 4796 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.011778 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.030339 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.030379 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.030580 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.031871 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.051374 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.071828 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.092151 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.111828 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.132627 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.151619 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.172034 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.211339 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214342 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b49dp\" (UniqueName: \"kubernetes.io/projected/82c1511d-7bf8-4245-9593-f2de287b9dd6-kube-api-access-b49dp\") pod \"machine-approver-56656f9798-54wv6\" (UID: \"82c1511d-7bf8-4245-9593-f2de287b9dd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-54wv6"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214369 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/93a93c24-45ae-43e1-9349-7471c6a218f8-encryption-config\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214384 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7b44e76-07c3-4876-965a-d7fe4c3c8e41-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-chjp6\" (UID: \"f7b44e76-07c3-4876-965a-d7fe4c3c8e41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214398 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7b44e76-07c3-4876-965a-d7fe4c3c8e41-service-ca-bundle\") pod \"authentication-operator-69f744f599-chjp6\" (UID: \"f7b44e76-07c3-4876-965a-d7fe4c3c8e41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214413 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214429 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93a93c24-45ae-43e1-9349-7471c6a218f8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214474 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-config\") pod \"route-controller-manager-6576b87f9c-g6ljh\" (UID: \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214498 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdtgs\" (UniqueName: \"kubernetes.io/projected/56729699-46b2-454c-83d5-9dce9d90ac49-kube-api-access-sdtgs\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214543 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/93a93c24-45ae-43e1-9349-7471c6a218f8-node-pullsecrets\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214573 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/93a93c24-45ae-43e1-9349-7471c6a218f8-etcd-client\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214592 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnt4q\" (UniqueName: \"kubernetes.io/projected/e579b64c-823d-4798-81c7-52c515a4f9f1-kube-api-access-hnt4q\") pod \"cluster-samples-operator-665b6dd947-2tpfm\" (UID: \"e579b64c-823d-4798-81c7-52c515a4f9f1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2tpfm"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214621 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lwbh6\" (UID: \"fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwbh6"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214641 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1aadfdc4-1cb5-432f-ab33-81df64ceb763-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n42q4\" (UID: \"1aadfdc4-1cb5-432f-ab33-81df64ceb763\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n42q4"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214716 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0677773-5515-4b74-9975-75dc72e6a127-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-j7krr\" (UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214739 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20e67d3a-290d-4ac2-b3ef-69330f127f52-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214788 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ggtl\" (UniqueName: \"kubernetes.io/projected/8c481089-1d82-466f-b015-541a729f07b7-kube-api-access-8ggtl\") pod \"router-default-5444994796-zsscn\" (UID: \"8c481089-1d82-466f-b015-541a729f07b7\") " pod="openshift-ingress/router-default-5444994796-zsscn"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214819 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b5fec51d-bef3-426c-ba74-90a48a94d9ce-ca-trust-extracted\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214838 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-serving-cert\") pod \"route-controller-manager-6576b87f9c-g6ljh\" (UID: \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214853 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8c481089-1d82-466f-b015-541a729f07b7-default-certificate\") pod \"router-default-5444994796-zsscn\" (UID: \"8c481089-1d82-466f-b015-541a729f07b7\") " pod="openshift-ingress/router-default-5444994796-zsscn"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214869 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5fec51d-bef3-426c-ba74-90a48a94d9ce-bound-sa-token\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214887 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56729699-46b2-454c-83d5-9dce9d90ac49-audit-dir\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg"
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214906 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName:
\"kubernetes.io/secret/20e67d3a-290d-4ac2-b3ef-69330f127f52-encryption-config\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214920 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20e67d3a-290d-4ac2-b3ef-69330f127f52-audit-dir\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214933 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c481089-1d82-466f-b015-541a729f07b7-service-ca-bundle\") pod \"router-default-5444994796-zsscn\" (UID: \"8c481089-1d82-466f-b015-541a729f07b7\") " pod="openshift-ingress/router-default-5444994796-zsscn" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214947 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/72c37509-1c20-49e9-9a41-49a79f90a2e2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-42nbd\" (UID: \"72c37509-1c20-49e9-9a41-49a79f90a2e2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42nbd" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.214991 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 
10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215021 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b8469b6-c8dd-4b39-81cb-ab35657cdfb5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2f4ks\" (UID: \"4b8469b6-c8dd-4b39-81cb-ab35657cdfb5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215038 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b8469b6-c8dd-4b39-81cb-ab35657cdfb5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2f4ks\" (UID: \"4b8469b6-c8dd-4b39-81cb-ab35657cdfb5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215060 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da-trusted-ca\") pod \"ingress-operator-5b745b69d9-jn5vx\" (UID: \"ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215079 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pww42\" (UniqueName: \"kubernetes.io/projected/ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da-kube-api-access-pww42\") pod \"ingress-operator-5b745b69d9-jn5vx\" (UID: \"ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215108 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jh9f\" (UniqueName: 
\"kubernetes.io/projected/a0677773-5515-4b74-9975-75dc72e6a127-kube-api-access-6jh9f\") pod \"controller-manager-879f6c89f-j7krr\" (UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215124 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215139 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dddee8f-f24e-4777-ba27-418ef5ef30c5-serving-cert\") pod \"etcd-operator-b45778765-pzjgw\" (UID: \"3dddee8f-f24e-4777-ba27-418ef5ef30c5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215157 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215172 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93a93c24-45ae-43e1-9349-7471c6a218f8-serving-cert\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 
05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215185 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7b44e76-07c3-4876-965a-d7fe4c3c8e41-config\") pod \"authentication-operator-69f744f599-chjp6\" (UID: \"f7b44e76-07c3-4876-965a-d7fe4c3c8e41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215201 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215218 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0677773-5515-4b74-9975-75dc72e6a127-serving-cert\") pod \"controller-manager-879f6c89f-j7krr\" (UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215241 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/82c1511d-7bf8-4245-9593-f2de287b9dd6-machine-approver-tls\") pod \"machine-approver-56656f9798-54wv6\" (UID: \"82c1511d-7bf8-4245-9593-f2de287b9dd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-54wv6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215256 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9kmm\" 
(UniqueName: \"kubernetes.io/projected/dc21f1de-fae3-4d9a-862e-e33257b4f177-kube-api-access-g9kmm\") pod \"downloads-7954f5f757-jfp56\" (UID: \"dc21f1de-fae3-4d9a-862e-e33257b4f177\") " pod="openshift-console/downloads-7954f5f757-jfp56" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215272 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5fec51d-bef3-426c-ba74-90a48a94d9ce-trusted-ca\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215286 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5886f6e5-57aa-4e8d-adb5-a0cb9f1648db-trusted-ca\") pod \"console-operator-58897d9998-6v7lm\" (UID: \"5886f6e5-57aa-4e8d-adb5-a0cb9f1648db\") " pod="openshift-console-operator/console-operator-58897d9998-6v7lm" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215299 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c37509-1c20-49e9-9a41-49a79f90a2e2-serving-cert\") pod \"openshift-config-operator-7777fb866f-42nbd\" (UID: \"72c37509-1c20-49e9-9a41-49a79f90a2e2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42nbd" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215312 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq646\" (UniqueName: \"kubernetes.io/projected/1b50e1f6-6ac8-4672-a14e-6fe4a6d3b4be-kube-api-access-mq646\") pod \"dns-operator-744455d44c-8hb86\" (UID: \"1b50e1f6-6ac8-4672-a14e-6fe4a6d3b4be\") " pod="openshift-dns-operator/dns-operator-744455d44c-8hb86" Dec 05 10:29:29 crc 
kubenswrapper[4796]: I1205 10:29:29.215323 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7b44e76-07c3-4876-965a-d7fe4c3c8e41-serving-cert\") pod \"authentication-operator-69f744f599-chjp6\" (UID: \"f7b44e76-07c3-4876-965a-d7fe4c3c8e41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215339 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b5fec51d-bef3-426c-ba74-90a48a94d9ce-registry-tls\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215356 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20e67d3a-290d-4ac2-b3ef-69330f127f52-serving-cert\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215368 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqv56\" (UniqueName: \"kubernetes.io/projected/72c37509-1c20-49e9-9a41-49a79f90a2e2-kube-api-access-hqv56\") pod \"openshift-config-operator-7777fb866f-42nbd\" (UID: \"72c37509-1c20-49e9-9a41-49a79f90a2e2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42nbd" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215381 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da-metrics-tls\") 
pod \"ingress-operator-5b745b69d9-jn5vx\" (UID: \"ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215394 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-console-oauth-config\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215415 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjcn6\" (UniqueName: \"kubernetes.io/projected/38ff4075-9f86-4457-af3b-a1db2ea74bf7-kube-api-access-rjcn6\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5k24\" (UID: \"38ff4075-9f86-4457-af3b-a1db2ea74bf7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5k24" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215445 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215460 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-client-ca\") pod \"route-controller-manager-6576b87f9c-g6ljh\" (UID: \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" Dec 05 10:29:29 crc 
kubenswrapper[4796]: I1205 10:29:29.215482 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3dddee8f-f24e-4777-ba27-418ef5ef30c5-etcd-client\") pod \"etcd-operator-b45778765-pzjgw\" (UID: \"3dddee8f-f24e-4777-ba27-418ef5ef30c5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215496 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38ff4075-9f86-4457-af3b-a1db2ea74bf7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5k24\" (UID: \"38ff4075-9f86-4457-af3b-a1db2ea74bf7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5k24" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215525 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxgv4\" (UniqueName: \"kubernetes.io/projected/5886f6e5-57aa-4e8d-adb5-a0cb9f1648db-kube-api-access-mxgv4\") pod \"console-operator-58897d9998-6v7lm\" (UID: \"5886f6e5-57aa-4e8d-adb5-a0cb9f1648db\") " pod="openshift-console-operator/console-operator-58897d9998-6v7lm" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215540 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c282a9e9-3ef7-41fa-8df8-89ae6aa7f592-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xxz98\" (UID: \"c282a9e9-3ef7-41fa-8df8-89ae6aa7f592\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxz98" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215554 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/1b50e1f6-6ac8-4672-a14e-6fe4a6d3b4be-metrics-tls\") pod \"dns-operator-744455d44c-8hb86\" (UID: \"1b50e1f6-6ac8-4672-a14e-6fe4a6d3b4be\") " pod="openshift-dns-operator/dns-operator-744455d44c-8hb86" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215566 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dddee8f-f24e-4777-ba27-418ef5ef30c5-config\") pod \"etcd-operator-b45778765-pzjgw\" (UID: \"3dddee8f-f24e-4777-ba27-418ef5ef30c5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215579 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-service-ca\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: E1205 10:29:29.215715 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:29.715704838 +0000 UTC m=+116.003810350 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215877 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b5fec51d-bef3-426c-ba74-90a48a94d9ce-registry-certificates\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215907 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktqsk\" (UniqueName: \"kubernetes.io/projected/b5fec51d-bef3-426c-ba74-90a48a94d9ce-kube-api-access-ktqsk\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215948 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jn5vx\" (UID: \"ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215965 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/93a93c24-45ae-43e1-9349-7471c6a218f8-image-import-ca\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215979 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbsz9\" (UniqueName: \"kubernetes.io/projected/93a93c24-45ae-43e1-9349-7471c6a218f8-kube-api-access-sbsz9\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.215995 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216009 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216056 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxmbz\" (UniqueName: \"kubernetes.io/projected/20e67d3a-290d-4ac2-b3ef-69330f127f52-kube-api-access-hxmbz\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216072 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfe92dbc-47df-44cc-bd99-123052631ff4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nxq92\" (UID: \"dfe92dbc-47df-44cc-bd99-123052631ff4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nxq92" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216103 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5886f6e5-57aa-4e8d-adb5-a0cb9f1648db-config\") pod \"console-operator-58897d9998-6v7lm\" (UID: \"5886f6e5-57aa-4e8d-adb5-a0cb9f1648db\") " pod="openshift-console-operator/console-operator-58897d9998-6v7lm" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216120 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfrnx\" (UniqueName: \"kubernetes.io/projected/1aadfdc4-1cb5-432f-ab33-81df64ceb763-kube-api-access-sfrnx\") pod \"openshift-apiserver-operator-796bbdcf4f-n42q4\" (UID: \"1aadfdc4-1cb5-432f-ab33-81df64ceb763\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n42q4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216136 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c1511d-7bf8-4245-9593-f2de287b9dd6-config\") pod \"machine-approver-56656f9798-54wv6\" (UID: \"82c1511d-7bf8-4245-9593-f2de287b9dd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-54wv6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216149 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3dddee8f-f24e-4777-ba27-418ef5ef30c5-etcd-service-ca\") pod \"etcd-operator-b45778765-pzjgw\" (UID: \"3dddee8f-f24e-4777-ba27-418ef5ef30c5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216164 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/93a93c24-45ae-43e1-9349-7471c6a218f8-etcd-serving-ca\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216200 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe92dbc-47df-44cc-bd99-123052631ff4-config\") pod \"kube-controller-manager-operator-78b949d7b-nxq92\" (UID: \"dfe92dbc-47df-44cc-bd99-123052631ff4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nxq92" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216216 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b5fec51d-bef3-426c-ba74-90a48a94d9ce-installation-pull-secrets\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216231 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/82c1511d-7bf8-4245-9593-f2de287b9dd6-auth-proxy-config\") pod \"machine-approver-56656f9798-54wv6\" (UID: 
\"82c1511d-7bf8-4245-9593-f2de287b9dd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-54wv6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216281 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b8469b6-c8dd-4b39-81cb-ab35657cdfb5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2f4ks\" (UID: \"4b8469b6-c8dd-4b39-81cb-ab35657cdfb5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216295 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7xdp\" (UniqueName: \"kubernetes.io/projected/3dddee8f-f24e-4777-ba27-418ef5ef30c5-kube-api-access-q7xdp\") pod \"etcd-operator-b45778765-pzjgw\" (UID: \"3dddee8f-f24e-4777-ba27-418ef5ef30c5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216318 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20e67d3a-290d-4ac2-b3ef-69330f127f52-etcd-client\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216335 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-trusted-ca-bundle\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216374 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216390 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216497 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216559 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-console-serving-cert\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216592 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0677773-5515-4b74-9975-75dc72e6a127-client-ca\") pod \"controller-manager-879f6c89f-j7krr\" 
(UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216622 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20e67d3a-290d-4ac2-b3ef-69330f127f52-audit-policies\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216643 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93a93c24-45ae-43e1-9349-7471c6a218f8-config\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216662 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e579b64c-823d-4798-81c7-52c515a4f9f1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2tpfm\" (UID: \"e579b64c-823d-4798-81c7-52c515a4f9f1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2tpfm" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216708 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20e67d3a-290d-4ac2-b3ef-69330f127f52-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216729 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3dddee8f-f24e-4777-ba27-418ef5ef30c5-etcd-ca\") pod \"etcd-operator-b45778765-pzjgw\" (UID: \"3dddee8f-f24e-4777-ba27-418ef5ef30c5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216745 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/93a93c24-45ae-43e1-9349-7471c6a218f8-audit\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216760 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/93a93c24-45ae-43e1-9349-7471c6a218f8-audit-dir\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216775 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-audit-policies\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216790 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5886f6e5-57aa-4e8d-adb5-a0cb9f1648db-serving-cert\") pod \"console-operator-58897d9998-6v7lm\" (UID: \"5886f6e5-57aa-4e8d-adb5-a0cb9f1648db\") " pod="openshift-console-operator/console-operator-58897d9998-6v7lm" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216804 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c481089-1d82-466f-b015-541a729f07b7-metrics-certs\") pod \"router-default-5444994796-zsscn\" (UID: \"8c481089-1d82-466f-b015-541a729f07b7\") " pod="openshift-ingress/router-default-5444994796-zsscn" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216821 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-oauth-serving-cert\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216848 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0677773-5515-4b74-9975-75dc72e6a127-config\") pod \"controller-manager-879f6c89f-j7krr\" (UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216863 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a-proxy-tls\") pod \"machine-config-controller-84d6567774-lwbh6\" (UID: \"fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwbh6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216878 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9zpp\" (UniqueName: \"kubernetes.io/projected/fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a-kube-api-access-k9zpp\") pod \"machine-config-controller-84d6567774-lwbh6\" (UID: \"fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwbh6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216892 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69qvf\" (UniqueName: \"kubernetes.io/projected/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-kube-api-access-69qvf\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216906 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfe92dbc-47df-44cc-bd99-123052631ff4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nxq92\" (UID: \"dfe92dbc-47df-44cc-bd99-123052631ff4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nxq92" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216922 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c282a9e9-3ef7-41fa-8df8-89ae6aa7f592-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xxz98\" (UID: \"c282a9e9-3ef7-41fa-8df8-89ae6aa7f592\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxz98" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216935 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-console-config\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216967 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/c282a9e9-3ef7-41fa-8df8-89ae6aa7f592-config\") pod \"kube-apiserver-operator-766d6c64bb-xxz98\" (UID: \"c282a9e9-3ef7-41fa-8df8-89ae6aa7f592\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxz98" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.216994 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38ff4075-9f86-4457-af3b-a1db2ea74bf7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5k24\" (UID: \"38ff4075-9f86-4457-af3b-a1db2ea74bf7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5k24" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.217022 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95sfn\" (UniqueName: \"kubernetes.io/projected/4b8469b6-c8dd-4b39-81cb-ab35657cdfb5-kube-api-access-95sfn\") pod \"cluster-image-registry-operator-dc59b4c8b-2f4ks\" (UID: \"4b8469b6-c8dd-4b39-81cb-ab35657cdfb5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.217037 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8zrx\" (UniqueName: \"kubernetes.io/projected/f7b44e76-07c3-4876-965a-d7fe4c3c8e41-kube-api-access-k8zrx\") pod \"authentication-operator-69f744f599-chjp6\" (UID: \"f7b44e76-07c3-4876-965a-d7fe4c3c8e41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.217052 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6978z\" (UniqueName: \"kubernetes.io/projected/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-kube-api-access-6978z\") 
pod \"route-controller-manager-6576b87f9c-g6ljh\" (UID: \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.217065 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.217081 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aadfdc4-1cb5-432f-ab33-81df64ceb763-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n42q4\" (UID: \"1aadfdc4-1cb5-432f-ab33-81df64ceb763\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n42q4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.217094 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8c481089-1d82-466f-b015-541a729f07b7-stats-auth\") pod \"router-default-5444994796-zsscn\" (UID: \"8c481089-1d82-466f-b015-541a729f07b7\") " pod="openshift-ingress/router-default-5444994796-zsscn" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.230504 4796 request.go:700] Waited for 1.198965276s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&limit=500&resourceVersion=0 Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.231377 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-secret" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.271967 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.291731 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.312187 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.318202 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:29 crc kubenswrapper[4796]: E1205 10:29:29.318325 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:29.818308363 +0000 UTC m=+116.106413876 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.318353 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dddee8f-f24e-4777-ba27-418ef5ef30c5-serving-cert\") pod \"etcd-operator-b45778765-pzjgw\" (UID: \"3dddee8f-f24e-4777-ba27-418ef5ef30c5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.318378 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac76ab3e-e33f-4a64-8f67-b65675369db5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wzr5n\" (UID: \"ac76ab3e-e33f-4a64-8f67-b65675369db5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wzr5n" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.318394 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/407028a3-3d79-420a-9dcf-751bb138e022-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gzkjl\" (UID: \"407028a3-3d79-420a-9dcf-751bb138e022\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzkjl" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.318412 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" 
(UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.318429 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vgjg\" (UniqueName: \"kubernetes.io/projected/3fec5c26-b7fd-4f6f-aa94-503312e81cdb-kube-api-access-2vgjg\") pod \"catalog-operator-68c6474976-5b8dg\" (UID: \"3fec5c26-b7fd-4f6f-aa94-503312e81cdb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.318445 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae18ea6b-e8c4-46e8-b9df-49944b44a999-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-blx8z\" (UID: \"ae18ea6b-e8c4-46e8-b9df-49944b44a999\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blx8z" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.318511 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9kmm\" (UniqueName: \"kubernetes.io/projected/dc21f1de-fae3-4d9a-862e-e33257b4f177-kube-api-access-g9kmm\") pod \"downloads-7954f5f757-jfp56\" (UID: \"dc21f1de-fae3-4d9a-862e-e33257b4f177\") " pod="openshift-console/downloads-7954f5f757-jfp56" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.318531 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c37509-1c20-49e9-9a41-49a79f90a2e2-serving-cert\") pod \"openshift-config-operator-7777fb866f-42nbd\" (UID: \"72c37509-1c20-49e9-9a41-49a79f90a2e2\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-42nbd" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.318551 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn9tx\" (UniqueName: \"kubernetes.io/projected/010954d4-08ed-47a9-a504-8149788b8b65-kube-api-access-bn9tx\") pod \"dns-default-9fpsf\" (UID: \"010954d4-08ed-47a9-a504-8149788b8b65\") " pod="openshift-dns/dns-default-9fpsf" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.318666 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvswd\" (UniqueName: \"kubernetes.io/projected/407028a3-3d79-420a-9dcf-751bb138e022-kube-api-access-mvswd\") pod \"kube-storage-version-migrator-operator-b67b599dd-gzkjl\" (UID: \"407028a3-3d79-420a-9dcf-751bb138e022\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzkjl" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.318710 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b5fec51d-bef3-426c-ba74-90a48a94d9ce-registry-tls\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.318728 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20e67d3a-290d-4ac2-b3ef-69330f127f52-serving-cert\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.318742 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da-metrics-tls\") pod \"ingress-operator-5b745b69d9-jn5vx\" (UID: \"ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.319054 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ed98b6c8-708c-4282-b1fe-4064fd60f446-signing-cabundle\") pod \"service-ca-9c57cc56f-6ftq2\" (UID: \"ed98b6c8-708c-4282-b1fe-4064fd60f446\") " pod="openshift-service-ca/service-ca-9c57cc56f-6ftq2" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.319072 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac76ab3e-e33f-4a64-8f67-b65675369db5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wzr5n\" (UID: \"ac76ab3e-e33f-4a64-8f67-b65675369db5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wzr5n" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.319109 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.319125 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38ff4075-9f86-4457-af3b-a1db2ea74bf7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5k24\" (UID: \"38ff4075-9f86-4457-af3b-a1db2ea74bf7\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5k24" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.319165 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3fec5c26-b7fd-4f6f-aa94-503312e81cdb-profile-collector-cert\") pod \"catalog-operator-68c6474976-5b8dg\" (UID: \"3fec5c26-b7fd-4f6f-aa94-503312e81cdb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.319185 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c282a9e9-3ef7-41fa-8df8-89ae6aa7f592-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xxz98\" (UID: \"c282a9e9-3ef7-41fa-8df8-89ae6aa7f592\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxz98" Dec 05 10:29:29 crc kubenswrapper[4796]: E1205 10:29:29.319383 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:29.819372051 +0000 UTC m=+116.107477565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.319741 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38ff4075-9f86-4457-af3b-a1db2ea74bf7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5k24\" (UID: \"38ff4075-9f86-4457-af3b-a1db2ea74bf7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5k24" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.319833 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b50e1f6-6ac8-4672-a14e-6fe4a6d3b4be-metrics-tls\") pod \"dns-operator-744455d44c-8hb86\" (UID: \"1b50e1f6-6ac8-4672-a14e-6fe4a6d3b4be\") " pod="openshift-dns-operator/dns-operator-744455d44c-8hb86" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.319895 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9a043ce0-300f-49b2-9f8f-30497aa70426-registration-dir\") pod \"csi-hostpathplugin-n87bt\" (UID: \"9a043ce0-300f-49b2-9f8f-30497aa70426\") " pod="hostpath-provisioner/csi-hostpathplugin-n87bt" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.319924 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a093a74f-f4f8-43ff-9f16-232761ad58e0-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-w7mdv\" (UID: \"a093a74f-f4f8-43ff-9f16-232761ad58e0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w7mdv" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.319969 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktqsk\" (UniqueName: \"kubernetes.io/projected/b5fec51d-bef3-426c-ba74-90a48a94d9ce-kube-api-access-ktqsk\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.319989 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jn5vx\" (UID: \"ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.320004 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/93a93c24-45ae-43e1-9349-7471c6a218f8-image-import-ca\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.320107 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.320124 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfe92dbc-47df-44cc-bd99-123052631ff4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nxq92\" (UID: \"dfe92dbc-47df-44cc-bd99-123052631ff4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nxq92" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.320158 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgb6p\" (UniqueName: \"kubernetes.io/projected/7f15aff4-f714-4cc5-a551-36653ade84e4-kube-api-access-vgb6p\") pod \"machine-config-operator-74547568cd-gw4sz\" (UID: \"7f15aff4-f714-4cc5-a551-36653ade84e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.320182 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5886f6e5-57aa-4e8d-adb5-a0cb9f1648db-config\") pod \"console-operator-58897d9998-6v7lm\" (UID: \"5886f6e5-57aa-4e8d-adb5-a0cb9f1648db\") " pod="openshift-console-operator/console-operator-58897d9998-6v7lm" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.320269 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c1511d-7bf8-4245-9593-f2de287b9dd6-config\") pod \"machine-approver-56656f9798-54wv6\" (UID: \"82c1511d-7bf8-4245-9593-f2de287b9dd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-54wv6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.320289 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3dddee8f-f24e-4777-ba27-418ef5ef30c5-etcd-service-ca\") pod \"etcd-operator-b45778765-pzjgw\" (UID: \"3dddee8f-f24e-4777-ba27-418ef5ef30c5\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.320697 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.320752 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82c1511d-7bf8-4245-9593-f2de287b9dd6-config\") pod \"machine-approver-56656f9798-54wv6\" (UID: \"82c1511d-7bf8-4245-9593-f2de287b9dd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-54wv6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.320807 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/93a93c24-45ae-43e1-9349-7471c6a218f8-etcd-serving-ca\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.320825 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9a043ce0-300f-49b2-9f8f-30497aa70426-mountpoint-dir\") pod \"csi-hostpathplugin-n87bt\" (UID: \"9a043ce0-300f-49b2-9f8f-30497aa70426\") " pod="hostpath-provisioner/csi-hostpathplugin-n87bt" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.320840 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7xdp\" (UniqueName: \"kubernetes.io/projected/3dddee8f-f24e-4777-ba27-418ef5ef30c5-kube-api-access-q7xdp\") pod 
\"etcd-operator-b45778765-pzjgw\" (UID: \"3dddee8f-f24e-4777-ba27-418ef5ef30c5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.320952 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/93a93c24-45ae-43e1-9349-7471c6a218f8-image-import-ca\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.321186 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3dddee8f-f24e-4777-ba27-418ef5ef30c5-etcd-service-ca\") pod \"etcd-operator-b45778765-pzjgw\" (UID: \"3dddee8f-f24e-4777-ba27-418ef5ef30c5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.321244 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/93a93c24-45ae-43e1-9349-7471c6a218f8-etcd-serving-ca\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.321301 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9a043ce0-300f-49b2-9f8f-30497aa70426-csi-data-dir\") pod \"csi-hostpathplugin-n87bt\" (UID: \"9a043ce0-300f-49b2-9f8f-30497aa70426\") " pod="hostpath-provisioner/csi-hostpathplugin-n87bt" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.321388 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/4b8469b6-c8dd-4b39-81cb-ab35657cdfb5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2f4ks\" (UID: \"4b8469b6-c8dd-4b39-81cb-ab35657cdfb5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.321558 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5886f6e5-57aa-4e8d-adb5-a0cb9f1648db-config\") pod \"console-operator-58897d9998-6v7lm\" (UID: \"5886f6e5-57aa-4e8d-adb5-a0cb9f1648db\") " pod="openshift-console-operator/console-operator-58897d9998-6v7lm" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.321578 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20e67d3a-290d-4ac2-b3ef-69330f127f52-etcd-client\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.321744 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.321824 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7f15aff4-f714-4cc5-a551-36653ade84e4-images\") pod \"machine-config-operator-74547568cd-gw4sz\" (UID: \"7f15aff4-f714-4cc5-a551-36653ade84e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 
10:29:29.321926 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93a93c24-45ae-43e1-9349-7471c6a218f8-config\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.321952 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e579b64c-823d-4798-81c7-52c515a4f9f1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2tpfm\" (UID: \"e579b64c-823d-4798-81c7-52c515a4f9f1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2tpfm" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.321968 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20e67d3a-290d-4ac2-b3ef-69330f127f52-audit-policies\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.321982 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/93a93c24-45ae-43e1-9349-7471c6a218f8-audit\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.321995 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3dddee8f-f24e-4777-ba27-418ef5ef30c5-etcd-ca\") pod \"etcd-operator-b45778765-pzjgw\" (UID: \"3dddee8f-f24e-4777-ba27-418ef5ef30c5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 
10:29:29.322011 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3fec5c26-b7fd-4f6f-aa94-503312e81cdb-srv-cert\") pod \"catalog-operator-68c6474976-5b8dg\" (UID: \"3fec5c26-b7fd-4f6f-aa94-503312e81cdb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322027 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f9ebd86c-0c38-4954-8c59-a4e0168fb2d5-secret-volume\") pod \"collect-profiles-29415495-lj8xl\" (UID: \"f9ebd86c-0c38-4954-8c59-a4e0168fb2d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322042 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c481089-1d82-466f-b015-541a729f07b7-metrics-certs\") pod \"router-default-5444994796-zsscn\" (UID: \"8c481089-1d82-466f-b015-541a729f07b7\") " pod="openshift-ingress/router-default-5444994796-zsscn" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322060 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a-proxy-tls\") pod \"machine-config-controller-84d6567774-lwbh6\" (UID: \"fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwbh6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322075 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/356c2f0d-a078-42e4-925a-e4f39864eb48-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gmlb4\" (UID: 
\"356c2f0d-a078-42e4-925a-e4f39864eb48\") " pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322093 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/766637c3-bd89-4f55-950e-68c553a5c6a4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dktck\" (UID: \"766637c3-bd89-4f55-950e-68c553a5c6a4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dktck" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322107 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9ebd86c-0c38-4954-8c59-a4e0168fb2d5-config-volume\") pod \"collect-profiles-29415495-lj8xl\" (UID: \"f9ebd86c-0c38-4954-8c59-a4e0168fb2d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322121 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/407028a3-3d79-420a-9dcf-751bb138e022-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gzkjl\" (UID: \"407028a3-3d79-420a-9dcf-751bb138e022\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzkjl" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322135 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz4mw\" (UniqueName: \"kubernetes.io/projected/b627bdb1-a428-4272-b9b4-2732a8cb4b2c-kube-api-access-wz4mw\") pod \"ingress-canary-hktfd\" (UID: \"b627bdb1-a428-4272-b9b4-2732a8cb4b2c\") " pod="openshift-ingress-canary/ingress-canary-hktfd" Dec 05 10:29:29 crc 
kubenswrapper[4796]: I1205 10:29:29.322152 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8zrx\" (UniqueName: \"kubernetes.io/projected/f7b44e76-07c3-4876-965a-d7fe4c3c8e41-kube-api-access-k8zrx\") pod \"authentication-operator-69f744f599-chjp6\" (UID: \"f7b44e76-07c3-4876-965a-d7fe4c3c8e41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322170 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aadfdc4-1cb5-432f-ab33-81df64ceb763-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n42q4\" (UID: \"1aadfdc4-1cb5-432f-ab33-81df64ceb763\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n42q4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322187 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b49dp\" (UniqueName: \"kubernetes.io/projected/82c1511d-7bf8-4245-9593-f2de287b9dd6-kube-api-access-b49dp\") pod \"machine-approver-56656f9798-54wv6\" (UID: \"82c1511d-7bf8-4245-9593-f2de287b9dd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-54wv6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322202 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77837609-281b-417a-8398-7732463eb92a-config\") pod \"machine-api-operator-5694c8668f-z2zft\" (UID: \"77837609-281b-417a-8398-7732463eb92a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2zft" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322218 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhltr\" (UniqueName: 
\"kubernetes.io/projected/ed98b6c8-708c-4282-b1fe-4064fd60f446-kube-api-access-jhltr\") pod \"service-ca-9c57cc56f-6ftq2\" (UID: \"ed98b6c8-708c-4282-b1fe-4064fd60f446\") " pod="openshift-service-ca/service-ca-9c57cc56f-6ftq2" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322232 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a842f36-0393-4416-b05b-9caa7959e8d8-webhook-cert\") pod \"packageserver-d55dfcdfc-ngdm4\" (UID: \"1a842f36-0393-4416-b05b-9caa7959e8d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322247 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtxgs\" (UniqueName: \"kubernetes.io/projected/a5c7cd05-933f-4424-92c8-5a7023694cc1-kube-api-access-mtxgs\") pod \"olm-operator-6b444d44fb-r45ng\" (UID: \"a5c7cd05-933f-4424-92c8-5a7023694cc1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r45ng" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322267 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-config\") pod \"route-controller-manager-6576b87f9c-g6ljh\" (UID: \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322282 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdtgs\" (UniqueName: \"kubernetes.io/projected/56729699-46b2-454c-83d5-9dce9d90ac49-kube-api-access-sdtgs\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: 
I1205 10:29:29.322297 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/93a93c24-45ae-43e1-9349-7471c6a218f8-etcd-client\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322311 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t9z7\" (UniqueName: \"kubernetes.io/projected/356c2f0d-a078-42e4-925a-e4f39864eb48-kube-api-access-6t9z7\") pod \"marketplace-operator-79b997595-gmlb4\" (UID: \"356c2f0d-a078-42e4-925a-e4f39864eb48\") " pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322325 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lwbh6\" (UID: \"fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwbh6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322340 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1aadfdc4-1cb5-432f-ab33-81df64ceb763-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n42q4\" (UID: \"1aadfdc4-1cb5-432f-ab33-81df64ceb763\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n42q4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322341 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b5fec51d-bef3-426c-ba74-90a48a94d9ce-registry-tls\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: 
\"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322354 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8knpz\" (UniqueName: \"kubernetes.io/projected/766637c3-bd89-4f55-950e-68c553a5c6a4-kube-api-access-8knpz\") pod \"control-plane-machine-set-operator-78cbb6b69f-dktck\" (UID: \"766637c3-bd89-4f55-950e-68c553a5c6a4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dktck" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322402 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20e67d3a-290d-4ac2-b3ef-69330f127f52-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322421 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0677773-5515-4b74-9975-75dc72e6a127-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-j7krr\" (UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322437 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-serving-cert\") pod \"route-controller-manager-6576b87f9c-g6ljh\" (UID: \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322453 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-99tcd\" (UniqueName: \"kubernetes.io/projected/77837609-281b-417a-8398-7732463eb92a-kube-api-access-99tcd\") pod \"machine-api-operator-5694c8668f-z2zft\" (UID: \"77837609-281b-417a-8398-7732463eb92a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2zft" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322480 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20e67d3a-290d-4ac2-b3ef-69330f127f52-serving-cert\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.322974 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93a93c24-45ae-43e1-9349-7471c6a218f8-config\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.323121 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9a043ce0-300f-49b2-9f8f-30497aa70426-plugins-dir\") pod \"csi-hostpathplugin-n87bt\" (UID: \"9a043ce0-300f-49b2-9f8f-30497aa70426\") " pod="hostpath-provisioner/csi-hostpathplugin-n87bt" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.323624 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/93a93c24-45ae-43e1-9349-7471c6a218f8-audit\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.323660 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1aadfdc4-1cb5-432f-ab33-81df64ceb763-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n42q4\" (UID: \"1aadfdc4-1cb5-432f-ab33-81df64ceb763\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n42q4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.323633 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3dddee8f-f24e-4777-ba27-418ef5ef30c5-etcd-ca\") pod \"etcd-operator-b45778765-pzjgw\" (UID: \"3dddee8f-f24e-4777-ba27-418ef5ef30c5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.323857 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/72c37509-1c20-49e9-9a41-49a79f90a2e2-serving-cert\") pod \"openshift-config-operator-7777fb866f-42nbd\" (UID: \"72c37509-1c20-49e9-9a41-49a79f90a2e2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42nbd" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.324446 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lwbh6\" (UID: \"fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwbh6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.324871 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da-metrics-tls\") pod \"ingress-operator-5b745b69d9-jn5vx\" (UID: \"ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.325152 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/4b8469b6-c8dd-4b39-81cb-ab35657cdfb5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-2f4ks\" (UID: \"4b8469b6-c8dd-4b39-81cb-ab35657cdfb5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.325242 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-config\") pod \"route-controller-manager-6576b87f9c-g6ljh\" (UID: \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.325384 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56729699-46b2-454c-83d5-9dce9d90ac49-audit-dir\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.325421 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20e67d3a-290d-4ac2-b3ef-69330f127f52-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.325739 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/20e67d3a-290d-4ac2-b3ef-69330f127f52-audit-policies\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 
10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.323151 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56729699-46b2-454c-83d5-9dce9d90ac49-audit-dir\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.326005 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.326085 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.326224 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c282a9e9-3ef7-41fa-8df8-89ae6aa7f592-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xxz98\" (UID: \"c282a9e9-3ef7-41fa-8df8-89ae6aa7f592\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxz98" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.326292 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a-proxy-tls\") pod \"machine-config-controller-84d6567774-lwbh6\" (UID: 
\"fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwbh6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.326410 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20e67d3a-290d-4ac2-b3ef-69330f127f52-audit-dir\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.326500 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b8469b6-c8dd-4b39-81cb-ab35657cdfb5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2f4ks\" (UID: \"4b8469b6-c8dd-4b39-81cb-ab35657cdfb5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.326583 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pww42\" (UniqueName: \"kubernetes.io/projected/ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da-kube-api-access-pww42\") pod \"ingress-operator-5b745b69d9-jn5vx\" (UID: \"ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.326647 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hp7t\" (UniqueName: \"kubernetes.io/projected/f9ebd86c-0c38-4954-8c59-a4e0168fb2d5-kube-api-access-4hp7t\") pod \"collect-profiles-29415495-lj8xl\" (UID: \"f9ebd86c-0c38-4954-8c59-a4e0168fb2d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.326752 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.326816 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ed98b6c8-708c-4282-b1fe-4064fd60f446-signing-key\") pod \"service-ca-9c57cc56f-6ftq2\" (UID: \"ed98b6c8-708c-4282-b1fe-4064fd60f446\") " pod="openshift-service-ca/service-ca-9c57cc56f-6ftq2" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.326877 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jh9f\" (UniqueName: \"kubernetes.io/projected/a0677773-5515-4b74-9975-75dc72e6a127-kube-api-access-6jh9f\") pod \"controller-manager-879f6c89f-j7krr\" (UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.326946 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.327008 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5c7cd05-933f-4424-92c8-5a7023694cc1-srv-cert\") pod \"olm-operator-6b444d44fb-r45ng\" (UID: \"a5c7cd05-933f-4424-92c8-5a7023694cc1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r45ng" 
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.327103 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7b44e76-07c3-4876-965a-d7fe4c3c8e41-config\") pod \"authentication-operator-69f744f599-chjp6\" (UID: \"f7b44e76-07c3-4876-965a-d7fe4c3c8e41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.327163 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg2rc\" (UniqueName: \"kubernetes.io/projected/b57e8bb9-165e-44d6-8ca2-f14462dd7d37-kube-api-access-bg2rc\") pod \"service-ca-operator-777779d784-lxxz8\" (UID: \"b57e8bb9-165e-44d6-8ca2-f14462dd7d37\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxxz8" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.327223 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1b50e1f6-6ac8-4672-a14e-6fe4a6d3b4be-metrics-tls\") pod \"dns-operator-744455d44c-8hb86\" (UID: \"1b50e1f6-6ac8-4672-a14e-6fe4a6d3b4be\") " pod="openshift-dns-operator/dns-operator-744455d44c-8hb86" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.327230 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a842f36-0393-4416-b05b-9caa7959e8d8-apiservice-cert\") pod \"packageserver-d55dfcdfc-ngdm4\" (UID: \"1a842f36-0393-4416-b05b-9caa7959e8d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.326448 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/20e67d3a-290d-4ac2-b3ef-69330f127f52-audit-dir\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: 
\"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.327284 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93a93c24-45ae-43e1-9349-7471c6a218f8-serving-cert\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.327660 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7b44e76-07c3-4876-965a-d7fe4c3c8e41-config\") pod \"authentication-operator-69f744f599-chjp6\" (UID: \"f7b44e76-07c3-4876-965a-d7fe4c3c8e41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.327830 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0677773-5515-4b74-9975-75dc72e6a127-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-j7krr\" (UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328225 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328263 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a0677773-5515-4b74-9975-75dc72e6a127-serving-cert\") pod \"controller-manager-879f6c89f-j7krr\" (UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328281 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/82c1511d-7bf8-4245-9593-f2de287b9dd6-machine-approver-tls\") pod \"machine-approver-56656f9798-54wv6\" (UID: \"82c1511d-7bf8-4245-9593-f2de287b9dd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-54wv6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328299 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5fec51d-bef3-426c-ba74-90a48a94d9ce-trusted-ca\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328313 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5886f6e5-57aa-4e8d-adb5-a0cb9f1648db-trusted-ca\") pod \"console-operator-58897d9998-6v7lm\" (UID: \"5886f6e5-57aa-4e8d-adb5-a0cb9f1648db\") " pod="openshift-console-operator/console-operator-58897d9998-6v7lm" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328328 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq646\" (UniqueName: \"kubernetes.io/projected/1b50e1f6-6ac8-4672-a14e-6fe4a6d3b4be-kube-api-access-mq646\") pod \"dns-operator-744455d44c-8hb86\" (UID: \"1b50e1f6-6ac8-4672-a14e-6fe4a6d3b4be\") " pod="openshift-dns-operator/dns-operator-744455d44c-8hb86" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328342 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7b44e76-07c3-4876-965a-d7fe4c3c8e41-serving-cert\") pod \"authentication-operator-69f744f599-chjp6\" (UID: \"f7b44e76-07c3-4876-965a-d7fe4c3c8e41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328359 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7f15aff4-f714-4cc5-a551-36653ade84e4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gw4sz\" (UID: \"7f15aff4-f714-4cc5-a551-36653ade84e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328375 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b57e8bb9-165e-44d6-8ca2-f14462dd7d37-serving-cert\") pod \"service-ca-operator-777779d784-lxxz8\" (UID: \"b57e8bb9-165e-44d6-8ca2-f14462dd7d37\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxxz8" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328392 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqv56\" (UniqueName: \"kubernetes.io/projected/72c37509-1c20-49e9-9a41-49a79f90a2e2-kube-api-access-hqv56\") pod \"openshift-config-operator-7777fb866f-42nbd\" (UID: \"72c37509-1c20-49e9-9a41-49a79f90a2e2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42nbd" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328406 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-console-oauth-config\") pod \"console-f9d7485db-t4hjr\" (UID: 
\"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328407 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-serving-cert\") pod \"route-controller-manager-6576b87f9c-g6ljh\" (UID: \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328422 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp6rj\" (UniqueName: \"kubernetes.io/projected/ae18ea6b-e8c4-46e8-b9df-49944b44a999-kube-api-access-fp6rj\") pod \"package-server-manager-789f6589d5-blx8z\" (UID: \"ae18ea6b-e8c4-46e8-b9df-49944b44a999\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blx8z" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328439 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjcn6\" (UniqueName: \"kubernetes.io/projected/38ff4075-9f86-4457-af3b-a1db2ea74bf7-kube-api-access-rjcn6\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5k24\" (UID: \"38ff4075-9f86-4457-af3b-a1db2ea74bf7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5k24" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328456 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-client-ca\") pod \"route-controller-manager-6576b87f9c-g6ljh\" (UID: \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328484 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3dddee8f-f24e-4777-ba27-418ef5ef30c5-etcd-client\") pod \"etcd-operator-b45778765-pzjgw\" (UID: \"3dddee8f-f24e-4777-ba27-418ef5ef30c5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328500 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac76ab3e-e33f-4a64-8f67-b65675369db5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wzr5n\" (UID: \"ac76ab3e-e33f-4a64-8f67-b65675369db5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wzr5n" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328517 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dddee8f-f24e-4777-ba27-418ef5ef30c5-config\") pod \"etcd-operator-b45778765-pzjgw\" (UID: \"3dddee8f-f24e-4777-ba27-418ef5ef30c5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328532 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-service-ca\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328547 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/010954d4-08ed-47a9-a504-8149788b8b65-config-volume\") pod \"dns-default-9fpsf\" (UID: \"010954d4-08ed-47a9-a504-8149788b8b65\") " pod="openshift-dns/dns-default-9fpsf" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328561 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b627bdb1-a428-4272-b9b4-2732a8cb4b2c-cert\") pod \"ingress-canary-hktfd\" (UID: \"b627bdb1-a428-4272-b9b4-2732a8cb4b2c\") " pod="openshift-ingress-canary/ingress-canary-hktfd" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328575 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxgv4\" (UniqueName: \"kubernetes.io/projected/5886f6e5-57aa-4e8d-adb5-a0cb9f1648db-kube-api-access-mxgv4\") pod \"console-operator-58897d9998-6v7lm\" (UID: \"5886f6e5-57aa-4e8d-adb5-a0cb9f1648db\") " pod="openshift-console-operator/console-operator-58897d9998-6v7lm" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328590 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b5fec51d-bef3-426c-ba74-90a48a94d9ce-registry-certificates\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328604 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbsz9\" (UniqueName: \"kubernetes.io/projected/93a93c24-45ae-43e1-9349-7471c6a218f8-kube-api-access-sbsz9\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328618 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/db1f92cf-f3a0-40a5-91d8-8fbe70b15948-node-bootstrap-token\") pod \"machine-config-server-2mpns\" (UID: \"db1f92cf-f3a0-40a5-91d8-8fbe70b15948\") " 
pod="openshift-machine-config-operator/machine-config-server-2mpns" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328634 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328649 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxmbz\" (UniqueName: \"kubernetes.io/projected/20e67d3a-290d-4ac2-b3ef-69330f127f52-kube-api-access-hxmbz\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328662 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1a842f36-0393-4416-b05b-9caa7959e8d8-tmpfs\") pod \"packageserver-d55dfcdfc-ngdm4\" (UID: \"1a842f36-0393-4416-b05b-9caa7959e8d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328728 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfrnx\" (UniqueName: \"kubernetes.io/projected/1aadfdc4-1cb5-432f-ab33-81df64ceb763-kube-api-access-sfrnx\") pod \"openshift-apiserver-operator-796bbdcf4f-n42q4\" (UID: \"1aadfdc4-1cb5-432f-ab33-81df64ceb763\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n42q4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328745 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dfe92dbc-47df-44cc-bd99-123052631ff4-config\") pod \"kube-controller-manager-operator-78b949d7b-nxq92\" (UID: \"dfe92dbc-47df-44cc-bd99-123052631ff4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nxq92" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328759 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b5fec51d-bef3-426c-ba74-90a48a94d9ce-installation-pull-secrets\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328775 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/82c1511d-7bf8-4245-9593-f2de287b9dd6-auth-proxy-config\") pod \"machine-approver-56656f9798-54wv6\" (UID: \"82c1511d-7bf8-4245-9593-f2de287b9dd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-54wv6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328788 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/010954d4-08ed-47a9-a504-8149788b8b65-metrics-tls\") pod \"dns-default-9fpsf\" (UID: \"010954d4-08ed-47a9-a504-8149788b8b65\") " pod="openshift-dns/dns-default-9fpsf" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328804 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84wzp\" (UniqueName: \"kubernetes.io/projected/de6d4145-d6ed-46d6-980f-07305bfd7933-kube-api-access-84wzp\") pod \"migrator-59844c95c7-l6k4p\" (UID: \"de6d4145-d6ed-46d6-980f-07305bfd7933\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l6k4p" Dec 05 10:29:29 crc 
kubenswrapper[4796]: I1205 10:29:29.328819 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-trusted-ca-bundle\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328837 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328851 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328866 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-console-serving-cert\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328881 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-574tq\" (UniqueName: \"kubernetes.io/projected/a093a74f-f4f8-43ff-9f16-232761ad58e0-kube-api-access-574tq\") pod \"multus-admission-controller-857f4d67dd-w7mdv\" (UID: 
\"a093a74f-f4f8-43ff-9f16-232761ad58e0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w7mdv" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328896 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0677773-5515-4b74-9975-75dc72e6a127-client-ca\") pod \"controller-manager-879f6c89f-j7krr\" (UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328910 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/77837609-281b-417a-8398-7732463eb92a-images\") pod \"machine-api-operator-5694c8668f-z2zft\" (UID: \"77837609-281b-417a-8398-7732463eb92a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2zft" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328926 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20e67d3a-290d-4ac2-b3ef-69330f127f52-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328940 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/93a93c24-45ae-43e1-9349-7471c6a218f8-audit-dir\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328956 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/77837609-281b-417a-8398-7732463eb92a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z2zft\" (UID: \"77837609-281b-417a-8398-7732463eb92a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2zft" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328970 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5c7cd05-933f-4424-92c8-5a7023694cc1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-r45ng\" (UID: \"a5c7cd05-933f-4424-92c8-5a7023694cc1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r45ng" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.328986 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-audit-policies\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329008 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5886f6e5-57aa-4e8d-adb5-a0cb9f1648db-serving-cert\") pod \"console-operator-58897d9998-6v7lm\" (UID: \"5886f6e5-57aa-4e8d-adb5-a0cb9f1648db\") " pod="openshift-console-operator/console-operator-58897d9998-6v7lm" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329022 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-oauth-serving-cert\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329037 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtrkv\" (UniqueName: \"kubernetes.io/projected/1a842f36-0393-4416-b05b-9caa7959e8d8-kube-api-access-vtrkv\") pod \"packageserver-d55dfcdfc-ngdm4\" (UID: \"1a842f36-0393-4416-b05b-9caa7959e8d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329062 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0677773-5515-4b74-9975-75dc72e6a127-config\") pod \"controller-manager-879f6c89f-j7krr\" (UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329078 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9zpp\" (UniqueName: \"kubernetes.io/projected/fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a-kube-api-access-k9zpp\") pod \"machine-config-controller-84d6567774-lwbh6\" (UID: \"fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwbh6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329093 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69qvf\" (UniqueName: \"kubernetes.io/projected/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-kube-api-access-69qvf\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329107 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfe92dbc-47df-44cc-bd99-123052631ff4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nxq92\" (UID: \"dfe92dbc-47df-44cc-bd99-123052631ff4\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nxq92" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329131 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-console-config\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329146 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj6qb\" (UniqueName: \"kubernetes.io/projected/db1f92cf-f3a0-40a5-91d8-8fbe70b15948-kube-api-access-gj6qb\") pod \"machine-config-server-2mpns\" (UID: \"db1f92cf-f3a0-40a5-91d8-8fbe70b15948\") " pod="openshift-machine-config-operator/machine-config-server-2mpns" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329167 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c282a9e9-3ef7-41fa-8df8-89ae6aa7f592-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xxz98\" (UID: \"c282a9e9-3ef7-41fa-8df8-89ae6aa7f592\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxz98" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329182 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c282a9e9-3ef7-41fa-8df8-89ae6aa7f592-config\") pod \"kube-apiserver-operator-766d6c64bb-xxz98\" (UID: \"c282a9e9-3ef7-41fa-8df8-89ae6aa7f592\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxz98" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329196 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/38ff4075-9f86-4457-af3b-a1db2ea74bf7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5k24\" (UID: \"38ff4075-9f86-4457-af3b-a1db2ea74bf7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5k24" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329210 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/db1f92cf-f3a0-40a5-91d8-8fbe70b15948-certs\") pod \"machine-config-server-2mpns\" (UID: \"db1f92cf-f3a0-40a5-91d8-8fbe70b15948\") " pod="openshift-machine-config-operator/machine-config-server-2mpns" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329226 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95sfn\" (UniqueName: \"kubernetes.io/projected/4b8469b6-c8dd-4b39-81cb-ab35657cdfb5-kube-api-access-95sfn\") pod \"cluster-image-registry-operator-dc59b4c8b-2f4ks\" (UID: \"4b8469b6-c8dd-4b39-81cb-ab35657cdfb5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329241 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/356c2f0d-a078-42e4-925a-e4f39864eb48-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gmlb4\" (UID: \"356c2f0d-a078-42e4-925a-e4f39864eb48\") " pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329258 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6978z\" (UniqueName: \"kubernetes.io/projected/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-kube-api-access-6978z\") pod \"route-controller-manager-6576b87f9c-g6ljh\" (UID: \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329274 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329289 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8c481089-1d82-466f-b015-541a729f07b7-stats-auth\") pod \"router-default-5444994796-zsscn\" (UID: \"8c481089-1d82-466f-b015-541a729f07b7\") " pod="openshift-ingress/router-default-5444994796-zsscn" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329305 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/93a93c24-45ae-43e1-9349-7471c6a218f8-encryption-config\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329320 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7b44e76-07c3-4876-965a-d7fe4c3c8e41-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-chjp6\" (UID: \"f7b44e76-07c3-4876-965a-d7fe4c3c8e41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329335 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f7b44e76-07c3-4876-965a-d7fe4c3c8e41-service-ca-bundle\") pod \"authentication-operator-69f744f599-chjp6\" (UID: \"f7b44e76-07c3-4876-965a-d7fe4c3c8e41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329349 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7f15aff4-f714-4cc5-a551-36653ade84e4-proxy-tls\") pod \"machine-config-operator-74547568cd-gw4sz\" (UID: \"7f15aff4-f714-4cc5-a551-36653ade84e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329366 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93a93c24-45ae-43e1-9349-7471c6a218f8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329382 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9a043ce0-300f-49b2-9f8f-30497aa70426-socket-dir\") pod \"csi-hostpathplugin-n87bt\" (UID: \"9a043ce0-300f-49b2-9f8f-30497aa70426\") " pod="hostpath-provisioner/csi-hostpathplugin-n87bt" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329399 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc 
kubenswrapper[4796]: I1205 10:29:29.329420 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/93a93c24-45ae-43e1-9349-7471c6a218f8-node-pullsecrets\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329436 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnt4q\" (UniqueName: \"kubernetes.io/projected/e579b64c-823d-4798-81c7-52c515a4f9f1-kube-api-access-hnt4q\") pod \"cluster-samples-operator-665b6dd947-2tpfm\" (UID: \"e579b64c-823d-4798-81c7-52c515a4f9f1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2tpfm" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329452 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ggtl\" (UniqueName: \"kubernetes.io/projected/8c481089-1d82-466f-b015-541a729f07b7-kube-api-access-8ggtl\") pod \"router-default-5444994796-zsscn\" (UID: \"8c481089-1d82-466f-b015-541a729f07b7\") " pod="openshift-ingress/router-default-5444994796-zsscn" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329477 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b5fec51d-bef3-426c-ba74-90a48a94d9ce-ca-trust-extracted\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329492 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8c481089-1d82-466f-b015-541a729f07b7-default-certificate\") pod \"router-default-5444994796-zsscn\" (UID: 
\"8c481089-1d82-466f-b015-541a729f07b7\") " pod="openshift-ingress/router-default-5444994796-zsscn" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329506 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b57e8bb9-165e-44d6-8ca2-f14462dd7d37-config\") pod \"service-ca-operator-777779d784-lxxz8\" (UID: \"b57e8bb9-165e-44d6-8ca2-f14462dd7d37\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxxz8" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329522 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/20e67d3a-290d-4ac2-b3ef-69330f127f52-encryption-config\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329537 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c481089-1d82-466f-b015-541a729f07b7-service-ca-bundle\") pod \"router-default-5444994796-zsscn\" (UID: \"8c481089-1d82-466f-b015-541a729f07b7\") " pod="openshift-ingress/router-default-5444994796-zsscn" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329552 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/72c37509-1c20-49e9-9a41-49a79f90a2e2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-42nbd\" (UID: \"72c37509-1c20-49e9-9a41-49a79f90a2e2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42nbd" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329567 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z87kv\" (UniqueName: 
\"kubernetes.io/projected/9a043ce0-300f-49b2-9f8f-30497aa70426-kube-api-access-z87kv\") pod \"csi-hostpathplugin-n87bt\" (UID: \"9a043ce0-300f-49b2-9f8f-30497aa70426\") " pod="hostpath-provisioner/csi-hostpathplugin-n87bt" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329583 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5fec51d-bef3-426c-ba74-90a48a94d9ce-bound-sa-token\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329598 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b8469b6-c8dd-4b39-81cb-ab35657cdfb5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2f4ks\" (UID: \"4b8469b6-c8dd-4b39-81cb-ab35657cdfb5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329615 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da-trusted-ca\") pod \"ingress-operator-5b745b69d9-jn5vx\" (UID: \"ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329843 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/93a93c24-45ae-43e1-9349-7471c6a218f8-etcd-client\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.329476 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/1aadfdc4-1cb5-432f-ab33-81df64ceb763-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n42q4\" (UID: \"1aadfdc4-1cb5-432f-ab33-81df64ceb763\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n42q4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.334379 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-client-ca\") pod \"route-controller-manager-6576b87f9c-g6ljh\" (UID: \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.334839 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/82c1511d-7bf8-4245-9593-f2de287b9dd6-auth-proxy-config\") pod \"machine-approver-56656f9798-54wv6\" (UID: \"82c1511d-7bf8-4245-9593-f2de287b9dd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-54wv6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.335081 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3dddee8f-f24e-4777-ba27-418ef5ef30c5-serving-cert\") pod \"etcd-operator-b45778765-pzjgw\" (UID: \"3dddee8f-f24e-4777-ba27-418ef5ef30c5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.335289 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5fec51d-bef3-426c-ba74-90a48a94d9ce-trusted-ca\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.335349 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-trusted-ca-bundle\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.335418 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/82c1511d-7bf8-4245-9593-f2de287b9dd6-machine-approver-tls\") pod \"machine-approver-56656f9798-54wv6\" (UID: \"82c1511d-7bf8-4245-9593-f2de287b9dd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-54wv6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.335569 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5886f6e5-57aa-4e8d-adb5-a0cb9f1648db-trusted-ca\") pod \"console-operator-58897d9998-6v7lm\" (UID: \"5886f6e5-57aa-4e8d-adb5-a0cb9f1648db\") " pod="openshift-console-operator/console-operator-58897d9998-6v7lm" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.335674 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7b44e76-07c3-4876-965a-d7fe4c3c8e41-service-ca-bundle\") pod \"authentication-operator-69f744f599-chjp6\" (UID: \"f7b44e76-07c3-4876-965a-d7fe4c3c8e41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.336082 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.336246 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.336669 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93a93c24-45ae-43e1-9349-7471c6a218f8-serving-cert\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.336727 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c481089-1d82-466f-b015-541a729f07b7-metrics-certs\") pod \"router-default-5444994796-zsscn\" (UID: \"8c481089-1d82-466f-b015-541a729f07b7\") " pod="openshift-ingress/router-default-5444994796-zsscn" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.336952 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3dddee8f-f24e-4777-ba27-418ef5ef30c5-config\") pod \"etcd-operator-b45778765-pzjgw\" (UID: \"3dddee8f-f24e-4777-ba27-418ef5ef30c5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.337145 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-console-oauth-config\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " 
pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.337322 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/20e67d3a-290d-4ac2-b3ef-69330f127f52-etcd-client\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.337387 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.337705 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7b44e76-07c3-4876-965a-d7fe4c3c8e41-serving-cert\") pod \"authentication-operator-69f744f599-chjp6\" (UID: \"f7b44e76-07c3-4876-965a-d7fe4c3c8e41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.337817 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93a93c24-45ae-43e1-9349-7471c6a218f8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.337864 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe92dbc-47df-44cc-bd99-123052631ff4-config\") pod \"kube-controller-manager-operator-78b949d7b-nxq92\" (UID: 
\"dfe92dbc-47df-44cc-bd99-123052631ff4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nxq92" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.337930 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.338116 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3dddee8f-f24e-4777-ba27-418ef5ef30c5-etcd-client\") pod \"etcd-operator-b45778765-pzjgw\" (UID: \"3dddee8f-f24e-4777-ba27-418ef5ef30c5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.338230 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfe92dbc-47df-44cc-bd99-123052631ff4-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nxq92\" (UID: \"dfe92dbc-47df-44cc-bd99-123052631ff4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nxq92" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.338356 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.338581 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.338645 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/93a93c24-45ae-43e1-9349-7471c6a218f8-node-pullsecrets\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.338394 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0677773-5515-4b74-9975-75dc72e6a127-serving-cert\") pod \"controller-manager-879f6c89f-j7krr\" (UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.338659 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-console-config\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.339252 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-audit-policies\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.339307 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-service-ca\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " 
pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.339403 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/93a93c24-45ae-43e1-9349-7471c6a218f8-audit-dir\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.339497 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b5fec51d-bef3-426c-ba74-90a48a94d9ce-registry-certificates\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.339572 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b5fec51d-bef3-426c-ba74-90a48a94d9ce-ca-trust-extracted\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.339599 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/20e67d3a-290d-4ac2-b3ef-69330f127f52-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.340031 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b5fec51d-bef3-426c-ba74-90a48a94d9ce-installation-pull-secrets\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: 
\"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.340081 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.340482 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0677773-5515-4b74-9975-75dc72e6a127-config\") pod \"controller-manager-879f6c89f-j7krr\" (UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.340574 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-oauth-serving-cert\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.341100 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0677773-5515-4b74-9975-75dc72e6a127-client-ca\") pod \"controller-manager-879f6c89f-j7krr\" (UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.341119 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c481089-1d82-466f-b015-541a729f07b7-service-ca-bundle\") pod 
\"router-default-5444994796-zsscn\" (UID: \"8c481089-1d82-466f-b015-541a729f07b7\") " pod="openshift-ingress/router-default-5444994796-zsscn" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.341802 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c282a9e9-3ef7-41fa-8df8-89ae6aa7f592-config\") pod \"kube-apiserver-operator-766d6c64bb-xxz98\" (UID: \"c282a9e9-3ef7-41fa-8df8-89ae6aa7f592\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxz98" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.341967 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da-trusted-ca\") pod \"ingress-operator-5b745b69d9-jn5vx\" (UID: \"ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.342003 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b8469b6-c8dd-4b39-81cb-ab35657cdfb5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-2f4ks\" (UID: \"4b8469b6-c8dd-4b39-81cb-ab35657cdfb5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.342294 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/72c37509-1c20-49e9-9a41-49a79f90a2e2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-42nbd\" (UID: \"72c37509-1c20-49e9-9a41-49a79f90a2e2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42nbd" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.343412 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f7b44e76-07c3-4876-965a-d7fe4c3c8e41-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-chjp6\" (UID: \"f7b44e76-07c3-4876-965a-d7fe4c3c8e41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.343555 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e579b64c-823d-4798-81c7-52c515a4f9f1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-2tpfm\" (UID: \"e579b64c-823d-4798-81c7-52c515a4f9f1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2tpfm" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.344174 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-console-serving-cert\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.345065 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.345083 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.345924 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38ff4075-9f86-4457-af3b-a1db2ea74bf7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5k24\" (UID: \"38ff4075-9f86-4457-af3b-a1db2ea74bf7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5k24" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.345933 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8c481089-1d82-466f-b015-541a729f07b7-stats-auth\") pod \"router-default-5444994796-zsscn\" (UID: \"8c481089-1d82-466f-b015-541a729f07b7\") " pod="openshift-ingress/router-default-5444994796-zsscn" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.346049 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/20e67d3a-290d-4ac2-b3ef-69330f127f52-encryption-config\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.347360 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/93a93c24-45ae-43e1-9349-7471c6a218f8-encryption-config\") pod \"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.347398 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5886f6e5-57aa-4e8d-adb5-a0cb9f1648db-serving-cert\") pod \"console-operator-58897d9998-6v7lm\" (UID: 
\"5886f6e5-57aa-4e8d-adb5-a0cb9f1648db\") " pod="openshift-console-operator/console-operator-58897d9998-6v7lm" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.347548 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8c481089-1d82-466f-b015-541a729f07b7-default-certificate\") pod \"router-default-5444994796-zsscn\" (UID: \"8c481089-1d82-466f-b015-541a729f07b7\") " pod="openshift-ingress/router-default-5444994796-zsscn" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.382609 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9kmm\" (UniqueName: \"kubernetes.io/projected/dc21f1de-fae3-4d9a-862e-e33257b4f177-kube-api-access-g9kmm\") pod \"downloads-7954f5f757-jfp56\" (UID: \"dc21f1de-fae3-4d9a-862e-e33257b4f177\") " pod="openshift-console/downloads-7954f5f757-jfp56" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.402944 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da-bound-sa-token\") pod \"ingress-operator-5b745b69d9-jn5vx\" (UID: \"ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.422857 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktqsk\" (UniqueName: \"kubernetes.io/projected/b5fec51d-bef3-426c-ba74-90a48a94d9ce-kube-api-access-ktqsk\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430142 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:29 crc kubenswrapper[4796]: E1205 10:29:29.430289 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:29.930257571 +0000 UTC m=+116.218363083 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430327 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3fec5c26-b7fd-4f6f-aa94-503312e81cdb-srv-cert\") pod \"catalog-operator-68c6474976-5b8dg\" (UID: \"3fec5c26-b7fd-4f6f-aa94-503312e81cdb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430353 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f9ebd86c-0c38-4954-8c59-a4e0168fb2d5-secret-volume\") pod \"collect-profiles-29415495-lj8xl\" (UID: \"f9ebd86c-0c38-4954-8c59-a4e0168fb2d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430374 4796 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/356c2f0d-a078-42e4-925a-e4f39864eb48-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gmlb4\" (UID: \"356c2f0d-a078-42e4-925a-e4f39864eb48\") " pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430391 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz4mw\" (UniqueName: \"kubernetes.io/projected/b627bdb1-a428-4272-b9b4-2732a8cb4b2c-kube-api-access-wz4mw\") pod \"ingress-canary-hktfd\" (UID: \"b627bdb1-a428-4272-b9b4-2732a8cb4b2c\") " pod="openshift-ingress-canary/ingress-canary-hktfd" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430409 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/766637c3-bd89-4f55-950e-68c553a5c6a4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dktck\" (UID: \"766637c3-bd89-4f55-950e-68c553a5c6a4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dktck" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430424 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9ebd86c-0c38-4954-8c59-a4e0168fb2d5-config-volume\") pod \"collect-profiles-29415495-lj8xl\" (UID: \"f9ebd86c-0c38-4954-8c59-a4e0168fb2d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430438 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/407028a3-3d79-420a-9dcf-751bb138e022-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gzkjl\" (UID: \"407028a3-3d79-420a-9dcf-751bb138e022\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzkjl" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430470 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a842f36-0393-4416-b05b-9caa7959e8d8-webhook-cert\") pod \"packageserver-d55dfcdfc-ngdm4\" (UID: \"1a842f36-0393-4416-b05b-9caa7959e8d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430495 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77837609-281b-417a-8398-7732463eb92a-config\") pod \"machine-api-operator-5694c8668f-z2zft\" (UID: \"77837609-281b-417a-8398-7732463eb92a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2zft" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430510 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhltr\" (UniqueName: \"kubernetes.io/projected/ed98b6c8-708c-4282-b1fe-4064fd60f446-kube-api-access-jhltr\") pod \"service-ca-9c57cc56f-6ftq2\" (UID: \"ed98b6c8-708c-4282-b1fe-4064fd60f446\") " pod="openshift-service-ca/service-ca-9c57cc56f-6ftq2" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430525 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtxgs\" (UniqueName: \"kubernetes.io/projected/a5c7cd05-933f-4424-92c8-5a7023694cc1-kube-api-access-mtxgs\") pod \"olm-operator-6b444d44fb-r45ng\" (UID: \"a5c7cd05-933f-4424-92c8-5a7023694cc1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r45ng" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430549 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t9z7\" (UniqueName: 
\"kubernetes.io/projected/356c2f0d-a078-42e4-925a-e4f39864eb48-kube-api-access-6t9z7\") pod \"marketplace-operator-79b997595-gmlb4\" (UID: \"356c2f0d-a078-42e4-925a-e4f39864eb48\") " pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430566 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8knpz\" (UniqueName: \"kubernetes.io/projected/766637c3-bd89-4f55-950e-68c553a5c6a4-kube-api-access-8knpz\") pod \"control-plane-machine-set-operator-78cbb6b69f-dktck\" (UID: \"766637c3-bd89-4f55-950e-68c553a5c6a4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dktck" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430583 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99tcd\" (UniqueName: \"kubernetes.io/projected/77837609-281b-417a-8398-7732463eb92a-kube-api-access-99tcd\") pod \"machine-api-operator-5694c8668f-z2zft\" (UID: \"77837609-281b-417a-8398-7732463eb92a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2zft" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430598 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9a043ce0-300f-49b2-9f8f-30497aa70426-plugins-dir\") pod \"csi-hostpathplugin-n87bt\" (UID: \"9a043ce0-300f-49b2-9f8f-30497aa70426\") " pod="hostpath-provisioner/csi-hostpathplugin-n87bt" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430625 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hp7t\" (UniqueName: \"kubernetes.io/projected/f9ebd86c-0c38-4954-8c59-a4e0168fb2d5-kube-api-access-4hp7t\") pod \"collect-profiles-29415495-lj8xl\" (UID: \"f9ebd86c-0c38-4954-8c59-a4e0168fb2d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl" Dec 05 10:29:29 crc 
kubenswrapper[4796]: I1205 10:29:29.430650 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ed98b6c8-708c-4282-b1fe-4064fd60f446-signing-key\") pod \"service-ca-9c57cc56f-6ftq2\" (UID: \"ed98b6c8-708c-4282-b1fe-4064fd60f446\") " pod="openshift-service-ca/service-ca-9c57cc56f-6ftq2" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430665 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5c7cd05-933f-4424-92c8-5a7023694cc1-srv-cert\") pod \"olm-operator-6b444d44fb-r45ng\" (UID: \"a5c7cd05-933f-4424-92c8-5a7023694cc1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r45ng" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430698 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg2rc\" (UniqueName: \"kubernetes.io/projected/b57e8bb9-165e-44d6-8ca2-f14462dd7d37-kube-api-access-bg2rc\") pod \"service-ca-operator-777779d784-lxxz8\" (UID: \"b57e8bb9-165e-44d6-8ca2-f14462dd7d37\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxxz8" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430714 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a842f36-0393-4416-b05b-9caa7959e8d8-apiservice-cert\") pod \"packageserver-d55dfcdfc-ngdm4\" (UID: \"1a842f36-0393-4416-b05b-9caa7959e8d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430733 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7f15aff4-f714-4cc5-a551-36653ade84e4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gw4sz\" (UID: \"7f15aff4-f714-4cc5-a551-36653ade84e4\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430746 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b57e8bb9-165e-44d6-8ca2-f14462dd7d37-serving-cert\") pod \"service-ca-operator-777779d784-lxxz8\" (UID: \"b57e8bb9-165e-44d6-8ca2-f14462dd7d37\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxxz8" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430775 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp6rj\" (UniqueName: \"kubernetes.io/projected/ae18ea6b-e8c4-46e8-b9df-49944b44a999-kube-api-access-fp6rj\") pod \"package-server-manager-789f6589d5-blx8z\" (UID: \"ae18ea6b-e8c4-46e8-b9df-49944b44a999\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blx8z" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430798 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac76ab3e-e33f-4a64-8f67-b65675369db5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wzr5n\" (UID: \"ac76ab3e-e33f-4a64-8f67-b65675369db5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wzr5n" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430819 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/010954d4-08ed-47a9-a504-8149788b8b65-config-volume\") pod \"dns-default-9fpsf\" (UID: \"010954d4-08ed-47a9-a504-8149788b8b65\") " pod="openshift-dns/dns-default-9fpsf" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430834 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b627bdb1-a428-4272-b9b4-2732a8cb4b2c-cert\") pod 
\"ingress-canary-hktfd\" (UID: \"b627bdb1-a428-4272-b9b4-2732a8cb4b2c\") " pod="openshift-ingress-canary/ingress-canary-hktfd" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430855 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/db1f92cf-f3a0-40a5-91d8-8fbe70b15948-node-bootstrap-token\") pod \"machine-config-server-2mpns\" (UID: \"db1f92cf-f3a0-40a5-91d8-8fbe70b15948\") " pod="openshift-machine-config-operator/machine-config-server-2mpns" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430873 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1a842f36-0393-4416-b05b-9caa7959e8d8-tmpfs\") pod \"packageserver-d55dfcdfc-ngdm4\" (UID: \"1a842f36-0393-4416-b05b-9caa7959e8d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430894 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84wzp\" (UniqueName: \"kubernetes.io/projected/de6d4145-d6ed-46d6-980f-07305bfd7933-kube-api-access-84wzp\") pod \"migrator-59844c95c7-l6k4p\" (UID: \"de6d4145-d6ed-46d6-980f-07305bfd7933\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l6k4p" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430908 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/010954d4-08ed-47a9-a504-8149788b8b65-metrics-tls\") pod \"dns-default-9fpsf\" (UID: \"010954d4-08ed-47a9-a504-8149788b8b65\") " pod="openshift-dns/dns-default-9fpsf" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430925 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-574tq\" (UniqueName: 
\"kubernetes.io/projected/a093a74f-f4f8-43ff-9f16-232761ad58e0-kube-api-access-574tq\") pod \"multus-admission-controller-857f4d67dd-w7mdv\" (UID: \"a093a74f-f4f8-43ff-9f16-232761ad58e0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w7mdv" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430943 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/77837609-281b-417a-8398-7732463eb92a-images\") pod \"machine-api-operator-5694c8668f-z2zft\" (UID: \"77837609-281b-417a-8398-7732463eb92a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2zft" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430959 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/77837609-281b-417a-8398-7732463eb92a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z2zft\" (UID: \"77837609-281b-417a-8398-7732463eb92a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2zft" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430974 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5c7cd05-933f-4424-92c8-5a7023694cc1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-r45ng\" (UID: \"a5c7cd05-933f-4424-92c8-5a7023694cc1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r45ng" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.430991 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtrkv\" (UniqueName: \"kubernetes.io/projected/1a842f36-0393-4416-b05b-9caa7959e8d8-kube-api-access-vtrkv\") pod \"packageserver-d55dfcdfc-ngdm4\" (UID: \"1a842f36-0393-4416-b05b-9caa7959e8d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4" Dec 05 10:29:29 crc 
kubenswrapper[4796]: I1205 10:29:29.431029 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj6qb\" (UniqueName: \"kubernetes.io/projected/db1f92cf-f3a0-40a5-91d8-8fbe70b15948-kube-api-access-gj6qb\") pod \"machine-config-server-2mpns\" (UID: \"db1f92cf-f3a0-40a5-91d8-8fbe70b15948\") " pod="openshift-machine-config-operator/machine-config-server-2mpns" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431044 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/db1f92cf-f3a0-40a5-91d8-8fbe70b15948-certs\") pod \"machine-config-server-2mpns\" (UID: \"db1f92cf-f3a0-40a5-91d8-8fbe70b15948\") " pod="openshift-machine-config-operator/machine-config-server-2mpns" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431065 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/356c2f0d-a078-42e4-925a-e4f39864eb48-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gmlb4\" (UID: \"356c2f0d-a078-42e4-925a-e4f39864eb48\") " pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431087 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7f15aff4-f714-4cc5-a551-36653ade84e4-proxy-tls\") pod \"machine-config-operator-74547568cd-gw4sz\" (UID: \"7f15aff4-f714-4cc5-a551-36653ade84e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431104 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9a043ce0-300f-49b2-9f8f-30497aa70426-socket-dir\") pod \"csi-hostpathplugin-n87bt\" (UID: \"9a043ce0-300f-49b2-9f8f-30497aa70426\") " 
pod="hostpath-provisioner/csi-hostpathplugin-n87bt" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431140 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b57e8bb9-165e-44d6-8ca2-f14462dd7d37-config\") pod \"service-ca-operator-777779d784-lxxz8\" (UID: \"b57e8bb9-165e-44d6-8ca2-f14462dd7d37\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxxz8" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431161 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z87kv\" (UniqueName: \"kubernetes.io/projected/9a043ce0-300f-49b2-9f8f-30497aa70426-kube-api-access-z87kv\") pod \"csi-hostpathplugin-n87bt\" (UID: \"9a043ce0-300f-49b2-9f8f-30497aa70426\") " pod="hostpath-provisioner/csi-hostpathplugin-n87bt" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431186 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac76ab3e-e33f-4a64-8f67-b65675369db5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wzr5n\" (UID: \"ac76ab3e-e33f-4a64-8f67-b65675369db5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wzr5n" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431200 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/407028a3-3d79-420a-9dcf-751bb138e022-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gzkjl\" (UID: \"407028a3-3d79-420a-9dcf-751bb138e022\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzkjl" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431215 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ae18ea6b-e8c4-46e8-b9df-49944b44a999-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-blx8z\" (UID: \"ae18ea6b-e8c4-46e8-b9df-49944b44a999\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blx8z" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431232 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vgjg\" (UniqueName: \"kubernetes.io/projected/3fec5c26-b7fd-4f6f-aa94-503312e81cdb-kube-api-access-2vgjg\") pod \"catalog-operator-68c6474976-5b8dg\" (UID: \"3fec5c26-b7fd-4f6f-aa94-503312e81cdb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431248 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvswd\" (UniqueName: \"kubernetes.io/projected/407028a3-3d79-420a-9dcf-751bb138e022-kube-api-access-mvswd\") pod \"kube-storage-version-migrator-operator-b67b599dd-gzkjl\" (UID: \"407028a3-3d79-420a-9dcf-751bb138e022\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzkjl" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431264 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn9tx\" (UniqueName: \"kubernetes.io/projected/010954d4-08ed-47a9-a504-8149788b8b65-kube-api-access-bn9tx\") pod \"dns-default-9fpsf\" (UID: \"010954d4-08ed-47a9-a504-8149788b8b65\") " pod="openshift-dns/dns-default-9fpsf" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431282 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ed98b6c8-708c-4282-b1fe-4064fd60f446-signing-cabundle\") pod \"service-ca-9c57cc56f-6ftq2\" (UID: \"ed98b6c8-708c-4282-b1fe-4064fd60f446\") " pod="openshift-service-ca/service-ca-9c57cc56f-6ftq2" 
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431296 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac76ab3e-e33f-4a64-8f67-b65675369db5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wzr5n\" (UID: \"ac76ab3e-e33f-4a64-8f67-b65675369db5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wzr5n" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431328 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431344 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3fec5c26-b7fd-4f6f-aa94-503312e81cdb-profile-collector-cert\") pod \"catalog-operator-68c6474976-5b8dg\" (UID: \"3fec5c26-b7fd-4f6f-aa94-503312e81cdb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431359 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a093a74f-f4f8-43ff-9f16-232761ad58e0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-w7mdv\" (UID: \"a093a74f-f4f8-43ff-9f16-232761ad58e0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w7mdv" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431376 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/9a043ce0-300f-49b2-9f8f-30497aa70426-registration-dir\") pod \"csi-hostpathplugin-n87bt\" (UID: \"9a043ce0-300f-49b2-9f8f-30497aa70426\") " pod="hostpath-provisioner/csi-hostpathplugin-n87bt" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431401 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgb6p\" (UniqueName: \"kubernetes.io/projected/7f15aff4-f714-4cc5-a551-36653ade84e4-kube-api-access-vgb6p\") pod \"machine-config-operator-74547568cd-gw4sz\" (UID: \"7f15aff4-f714-4cc5-a551-36653ade84e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431415 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9a043ce0-300f-49b2-9f8f-30497aa70426-mountpoint-dir\") pod \"csi-hostpathplugin-n87bt\" (UID: \"9a043ce0-300f-49b2-9f8f-30497aa70426\") " pod="hostpath-provisioner/csi-hostpathplugin-n87bt" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431448 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9a043ce0-300f-49b2-9f8f-30497aa70426-csi-data-dir\") pod \"csi-hostpathplugin-n87bt\" (UID: \"9a043ce0-300f-49b2-9f8f-30497aa70426\") " pod="hostpath-provisioner/csi-hostpathplugin-n87bt" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431477 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7f15aff4-f714-4cc5-a551-36653ade84e4-images\") pod \"machine-config-operator-74547568cd-gw4sz\" (UID: \"7f15aff4-f714-4cc5-a551-36653ade84e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.431840 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9a043ce0-300f-49b2-9f8f-30497aa70426-mountpoint-dir\") pod \"csi-hostpathplugin-n87bt\" (UID: \"9a043ce0-300f-49b2-9f8f-30497aa70426\") " pod="hostpath-provisioner/csi-hostpathplugin-n87bt" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.432170 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9a043ce0-300f-49b2-9f8f-30497aa70426-plugins-dir\") pod \"csi-hostpathplugin-n87bt\" (UID: \"9a043ce0-300f-49b2-9f8f-30497aa70426\") " pod="hostpath-provisioner/csi-hostpathplugin-n87bt" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.432236 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7f15aff4-f714-4cc5-a551-36653ade84e4-images\") pod \"machine-config-operator-74547568cd-gw4sz\" (UID: \"7f15aff4-f714-4cc5-a551-36653ade84e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.432532 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9a043ce0-300f-49b2-9f8f-30497aa70426-csi-data-dir\") pod \"csi-hostpathplugin-n87bt\" (UID: \"9a043ce0-300f-49b2-9f8f-30497aa70426\") " pod="hostpath-provisioner/csi-hostpathplugin-n87bt" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.432580 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9a043ce0-300f-49b2-9f8f-30497aa70426-registration-dir\") pod \"csi-hostpathplugin-n87bt\" (UID: \"9a043ce0-300f-49b2-9f8f-30497aa70426\") " pod="hostpath-provisioner/csi-hostpathplugin-n87bt" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.432868 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/ed98b6c8-708c-4282-b1fe-4064fd60f446-signing-cabundle\") pod \"service-ca-9c57cc56f-6ftq2\" (UID: \"ed98b6c8-708c-4282-b1fe-4064fd60f446\") " pod="openshift-service-ca/service-ca-9c57cc56f-6ftq2" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.432908 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77837609-281b-417a-8398-7732463eb92a-config\") pod \"machine-api-operator-5694c8668f-z2zft\" (UID: \"77837609-281b-417a-8398-7732463eb92a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2zft" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.433327 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9ebd86c-0c38-4954-8c59-a4e0168fb2d5-config-volume\") pod \"collect-profiles-29415495-lj8xl\" (UID: \"f9ebd86c-0c38-4954-8c59-a4e0168fb2d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.433485 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ed98b6c8-708c-4282-b1fe-4064fd60f446-signing-key\") pod \"service-ca-9c57cc56f-6ftq2\" (UID: \"ed98b6c8-708c-4282-b1fe-4064fd60f446\") " pod="openshift-service-ca/service-ca-9c57cc56f-6ftq2" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.433572 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b57e8bb9-165e-44d6-8ca2-f14462dd7d37-config\") pod \"service-ca-operator-777779d784-lxxz8\" (UID: \"b57e8bb9-165e-44d6-8ca2-f14462dd7d37\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxxz8" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.433587 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-jfp56" Dec 05 10:29:29 crc kubenswrapper[4796]: E1205 10:29:29.433855 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:29.933842918 +0000 UTC m=+116.221948431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.434160 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/407028a3-3d79-420a-9dcf-751bb138e022-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-gzkjl\" (UID: \"407028a3-3d79-420a-9dcf-751bb138e022\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzkjl" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.434405 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac76ab3e-e33f-4a64-8f67-b65675369db5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wzr5n\" (UID: \"ac76ab3e-e33f-4a64-8f67-b65675369db5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wzr5n" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.434509 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/77837609-281b-417a-8398-7732463eb92a-images\") pod \"machine-api-operator-5694c8668f-z2zft\" (UID: \"77837609-281b-417a-8398-7732463eb92a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2zft" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.434751 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/010954d4-08ed-47a9-a504-8149788b8b65-metrics-tls\") pod \"dns-default-9fpsf\" (UID: \"010954d4-08ed-47a9-a504-8149788b8b65\") " pod="openshift-dns/dns-default-9fpsf" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.435270 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/010954d4-08ed-47a9-a504-8149788b8b65-config-volume\") pod \"dns-default-9fpsf\" (UID: \"010954d4-08ed-47a9-a504-8149788b8b65\") " pod="openshift-dns/dns-default-9fpsf" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.435447 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1a842f36-0393-4416-b05b-9caa7959e8d8-tmpfs\") pod \"packageserver-d55dfcdfc-ngdm4\" (UID: \"1a842f36-0393-4416-b05b-9caa7959e8d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.435461 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9a043ce0-300f-49b2-9f8f-30497aa70426-socket-dir\") pod \"csi-hostpathplugin-n87bt\" (UID: \"9a043ce0-300f-49b2-9f8f-30497aa70426\") " pod="hostpath-provisioner/csi-hostpathplugin-n87bt" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.435642 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a093a74f-f4f8-43ff-9f16-232761ad58e0-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-w7mdv\" (UID: \"a093a74f-f4f8-43ff-9f16-232761ad58e0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-w7mdv" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.435886 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/407028a3-3d79-420a-9dcf-751bb138e022-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-gzkjl\" (UID: \"407028a3-3d79-420a-9dcf-751bb138e022\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzkjl" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.435923 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3fec5c26-b7fd-4f6f-aa94-503312e81cdb-profile-collector-cert\") pod \"catalog-operator-68c6474976-5b8dg\" (UID: \"3fec5c26-b7fd-4f6f-aa94-503312e81cdb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.436128 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7f15aff4-f714-4cc5-a551-36653ade84e4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gw4sz\" (UID: \"7f15aff4-f714-4cc5-a551-36653ade84e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.436305 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/db1f92cf-f3a0-40a5-91d8-8fbe70b15948-certs\") pod \"machine-config-server-2mpns\" (UID: \"db1f92cf-f3a0-40a5-91d8-8fbe70b15948\") " pod="openshift-machine-config-operator/machine-config-server-2mpns" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.436445 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1a842f36-0393-4416-b05b-9caa7959e8d8-webhook-cert\") pod \"packageserver-d55dfcdfc-ngdm4\" (UID: \"1a842f36-0393-4416-b05b-9caa7959e8d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.436567 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/356c2f0d-a078-42e4-925a-e4f39864eb48-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gmlb4\" (UID: \"356c2f0d-a078-42e4-925a-e4f39864eb48\") " pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.436676 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/77837609-281b-417a-8398-7732463eb92a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z2zft\" (UID: \"77837609-281b-417a-8398-7732463eb92a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2zft" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.436757 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/db1f92cf-f3a0-40a5-91d8-8fbe70b15948-node-bootstrap-token\") pod \"machine-config-server-2mpns\" (UID: \"db1f92cf-f3a0-40a5-91d8-8fbe70b15948\") " pod="openshift-machine-config-operator/machine-config-server-2mpns" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.436984 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b627bdb1-a428-4272-b9b4-2732a8cb4b2c-cert\") pod \"ingress-canary-hktfd\" (UID: \"b627bdb1-a428-4272-b9b4-2732a8cb4b2c\") " pod="openshift-ingress-canary/ingress-canary-hktfd" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.437460 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b57e8bb9-165e-44d6-8ca2-f14462dd7d37-serving-cert\") pod \"service-ca-operator-777779d784-lxxz8\" (UID: \"b57e8bb9-165e-44d6-8ca2-f14462dd7d37\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxxz8" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.437589 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac76ab3e-e33f-4a64-8f67-b65675369db5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wzr5n\" (UID: \"ac76ab3e-e33f-4a64-8f67-b65675369db5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wzr5n" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.437762 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1a842f36-0393-4416-b05b-9caa7959e8d8-apiservice-cert\") pod \"packageserver-d55dfcdfc-ngdm4\" (UID: \"1a842f36-0393-4416-b05b-9caa7959e8d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.437998 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a5c7cd05-933f-4424-92c8-5a7023694cc1-srv-cert\") pod \"olm-operator-6b444d44fb-r45ng\" (UID: \"a5c7cd05-933f-4424-92c8-5a7023694cc1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r45ng" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.438029 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f9ebd86c-0c38-4954-8c59-a4e0168fb2d5-secret-volume\") pod \"collect-profiles-29415495-lj8xl\" (UID: \"f9ebd86c-0c38-4954-8c59-a4e0168fb2d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl" 
Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.438507 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7f15aff4-f714-4cc5-a551-36653ade84e4-proxy-tls\") pod \"machine-config-operator-74547568cd-gw4sz\" (UID: \"7f15aff4-f714-4cc5-a551-36653ade84e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.438509 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3fec5c26-b7fd-4f6f-aa94-503312e81cdb-srv-cert\") pod \"catalog-operator-68c6474976-5b8dg\" (UID: \"3fec5c26-b7fd-4f6f-aa94-503312e81cdb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.438790 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a5c7cd05-933f-4424-92c8-5a7023694cc1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-r45ng\" (UID: \"a5c7cd05-933f-4424-92c8-5a7023694cc1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r45ng" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.438849 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/356c2f0d-a078-42e4-925a-e4f39864eb48-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gmlb4\" (UID: \"356c2f0d-a078-42e4-925a-e4f39864eb48\") " pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.439185 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae18ea6b-e8c4-46e8-b9df-49944b44a999-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-blx8z\" (UID: \"ae18ea6b-e8c4-46e8-b9df-49944b44a999\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blx8z" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.440484 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/766637c3-bd89-4f55-950e-68c553a5c6a4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-dktck\" (UID: \"766637c3-bd89-4f55-950e-68c553a5c6a4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dktck" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.445424 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfe92dbc-47df-44cc-bd99-123052631ff4-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nxq92\" (UID: \"dfe92dbc-47df-44cc-bd99-123052631ff4\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nxq92" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.463019 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7xdp\" (UniqueName: \"kubernetes.io/projected/3dddee8f-f24e-4777-ba27-418ef5ef30c5-kube-api-access-q7xdp\") pod \"etcd-operator-b45778765-pzjgw\" (UID: \"3dddee8f-f24e-4777-ba27-418ef5ef30c5\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.482965 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8zrx\" (UniqueName: \"kubernetes.io/projected/f7b44e76-07c3-4876-965a-d7fe4c3c8e41-kube-api-access-k8zrx\") pod \"authentication-operator-69f744f599-chjp6\" (UID: \"f7b44e76-07c3-4876-965a-d7fe4c3c8e41\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6" Dec 05 10:29:29 crc 
kubenswrapper[4796]: I1205 10:29:29.503310 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b49dp\" (UniqueName: \"kubernetes.io/projected/82c1511d-7bf8-4245-9593-f2de287b9dd6-kube-api-access-b49dp\") pod \"machine-approver-56656f9798-54wv6\" (UID: \"82c1511d-7bf8-4245-9593-f2de287b9dd6\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-54wv6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.523718 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdtgs\" (UniqueName: \"kubernetes.io/projected/56729699-46b2-454c-83d5-9dce9d90ac49-kube-api-access-sdtgs\") pod \"oauth-openshift-558db77b4-qvpdg\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.533053 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:29 crc kubenswrapper[4796]: E1205 10:29:29.533426 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:30.033407311 +0000 UTC m=+116.321512825 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.543588 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b8469b6-c8dd-4b39-81cb-ab35657cdfb5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-2f4ks\" (UID: \"4b8469b6-c8dd-4b39-81cb-ab35657cdfb5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.552538 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-jfp56"] Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.563490 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pww42\" (UniqueName: \"kubernetes.io/projected/ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da-kube-api-access-pww42\") pod \"ingress-operator-5b745b69d9-jn5vx\" (UID: \"ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.583918 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jh9f\" (UniqueName: \"kubernetes.io/projected/a0677773-5515-4b74-9975-75dc72e6a127-kube-api-access-6jh9f\") pod \"controller-manager-879f6c89f-j7krr\" (UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.603295 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mq646\" (UniqueName: \"kubernetes.io/projected/1b50e1f6-6ac8-4672-a14e-6fe4a6d3b4be-kube-api-access-mq646\") pod \"dns-operator-744455d44c-8hb86\" (UID: \"1b50e1f6-6ac8-4672-a14e-6fe4a6d3b4be\") " pod="openshift-dns-operator/dns-operator-744455d44c-8hb86" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.612311 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.618957 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.623160 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqv56\" (UniqueName: \"kubernetes.io/projected/72c37509-1c20-49e9-9a41-49a79f90a2e2-kube-api-access-hqv56\") pod \"openshift-config-operator-7777fb866f-42nbd\" (UID: \"72c37509-1c20-49e9-9a41-49a79f90a2e2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-42nbd" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.626039 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.633441 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42nbd" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.634566 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: E1205 10:29:29.635053 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:30.134839915 +0000 UTC m=+116.422945419 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.642708 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8hb86" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.644705 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9zpp\" (UniqueName: \"kubernetes.io/projected/fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a-kube-api-access-k9zpp\") pod \"machine-config-controller-84d6567774-lwbh6\" (UID: \"fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwbh6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.663549 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxmbz\" (UniqueName: \"kubernetes.io/projected/20e67d3a-290d-4ac2-b3ef-69330f127f52-kube-api-access-hxmbz\") pod \"apiserver-7bbb656c7d-pkl5m\" (UID: \"20e67d3a-290d-4ac2-b3ef-69330f127f52\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.670357 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.674529 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nxq92" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.682644 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfrnx\" (UniqueName: \"kubernetes.io/projected/1aadfdc4-1cb5-432f-ab33-81df64ceb763-kube-api-access-sfrnx\") pod \"openshift-apiserver-operator-796bbdcf4f-n42q4\" (UID: \"1aadfdc4-1cb5-432f-ab33-81df64ceb763\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n42q4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.685912 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-54wv6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.698121 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwbh6" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.707002 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjcn6\" (UniqueName: \"kubernetes.io/projected/38ff4075-9f86-4457-af3b-a1db2ea74bf7-kube-api-access-rjcn6\") pod \"openshift-controller-manager-operator-756b6f6bc6-l5k24\" (UID: \"38ff4075-9f86-4457-af3b-a1db2ea74bf7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5k24" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.715594 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.721776 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5k24" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.724917 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69qvf\" (UniqueName: \"kubernetes.io/projected/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-kube-api-access-69qvf\") pod \"console-f9d7485db-t4hjr\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.736375 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:29 crc kubenswrapper[4796]: E1205 10:29:29.736510 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:30.236494938 +0000 UTC m=+116.524600451 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.736666 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: E1205 10:29:29.737029 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:30.237021398 +0000 UTC m=+116.525126910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.745364 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ggtl\" (UniqueName: \"kubernetes.io/projected/8c481089-1d82-466f-b015-541a729f07b7-kube-api-access-8ggtl\") pod \"router-default-5444994796-zsscn\" (UID: \"8c481089-1d82-466f-b015-541a729f07b7\") " pod="openshift-ingress/router-default-5444994796-zsscn" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.764599 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnt4q\" (UniqueName: \"kubernetes.io/projected/e579b64c-823d-4798-81c7-52c515a4f9f1-kube-api-access-hnt4q\") pod \"cluster-samples-operator-665b6dd947-2tpfm\" (UID: \"e579b64c-823d-4798-81c7-52c515a4f9f1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2tpfm" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.784957 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxgv4\" (UniqueName: \"kubernetes.io/projected/5886f6e5-57aa-4e8d-adb5-a0cb9f1648db-kube-api-access-mxgv4\") pod \"console-operator-58897d9998-6v7lm\" (UID: \"5886f6e5-57aa-4e8d-adb5-a0cb9f1648db\") " pod="openshift-console-operator/console-operator-58897d9998-6v7lm" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.805797 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbsz9\" (UniqueName: \"kubernetes.io/projected/93a93c24-45ae-43e1-9349-7471c6a218f8-kube-api-access-sbsz9\") pod 
\"apiserver-76f77b778f-v4phj\" (UID: \"93a93c24-45ae-43e1-9349-7471c6a218f8\") " pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.826046 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95sfn\" (UniqueName: \"kubernetes.io/projected/4b8469b6-c8dd-4b39-81cb-ab35657cdfb5-kube-api-access-95sfn\") pod \"cluster-image-registry-operator-dc59b4c8b-2f4ks\" (UID: \"4b8469b6-c8dd-4b39-81cb-ab35657cdfb5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.837572 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:29 crc kubenswrapper[4796]: E1205 10:29:29.837772 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:30.337751661 +0000 UTC m=+116.625857174 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.838258 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: E1205 10:29:29.838548 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:30.338540093 +0000 UTC m=+116.626645606 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.844705 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c282a9e9-3ef7-41fa-8df8-89ae6aa7f592-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xxz98\" (UID: \"c282a9e9-3ef7-41fa-8df8-89ae6aa7f592\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxz98" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.857242 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8hb86"] Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.858432 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.864984 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.865312 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5fec51d-bef3-426c-ba74-90a48a94d9ce-bound-sa-token\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.883103 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6978z\" (UniqueName: \"kubernetes.io/projected/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-kube-api-access-6978z\") pod \"route-controller-manager-6576b87f9c-g6ljh\" (UID: \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.901109 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n42q4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.925458 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtrkv\" (UniqueName: \"kubernetes.io/projected/1a842f36-0393-4416-b05b-9caa7959e8d8-kube-api-access-vtrkv\") pod \"packageserver-d55dfcdfc-ngdm4\" (UID: \"1a842f36-0393-4416-b05b-9caa7959e8d8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.939470 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:29 crc kubenswrapper[4796]: E1205 10:29:29.939652 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:30.439624002 +0000 UTC m=+116.727729515 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.939862 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:29 crc kubenswrapper[4796]: E1205 10:29:29.940244 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:30.440236803 +0000 UTC m=+116.728342316 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.944938 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vgjg\" (UniqueName: \"kubernetes.io/projected/3fec5c26-b7fd-4f6f-aa94-503312e81cdb-kube-api-access-2vgjg\") pod \"catalog-operator-68c6474976-5b8dg\" (UID: \"3fec5c26-b7fd-4f6f-aa94-503312e81cdb\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.951079 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6v7lm" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.958345 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.959987 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-chjp6"] Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.960606 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-j7krr"] Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.964386 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2tpfm" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.968249 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj6qb\" (UniqueName: \"kubernetes.io/projected/db1f92cf-f3a0-40a5-91d8-8fbe70b15948-kube-api-access-gj6qb\") pod \"machine-config-server-2mpns\" (UID: \"db1f92cf-f3a0-40a5-91d8-8fbe70b15948\") " pod="openshift-machine-config-operator/machine-config-server-2mpns" Dec 05 10:29:29 crc kubenswrapper[4796]: W1205 10:29:29.975516 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7b44e76_07c3_4876_965a_d7fe4c3c8e41.slice/crio-78f5aa21add6359512b2ef03b1edf5d88922354b771e0ef8464943d283f5dc29 WatchSource:0}: Error finding container 78f5aa21add6359512b2ef03b1edf5d88922354b771e0ef8464943d283f5dc29: Status 404 returned error can't find the container with id 78f5aa21add6359512b2ef03b1edf5d88922354b771e0ef8464943d283f5dc29 Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.980023 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxz98" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.984853 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvswd\" (UniqueName: \"kubernetes.io/projected/407028a3-3d79-420a-9dcf-751bb138e022-kube-api-access-mvswd\") pod \"kube-storage-version-migrator-operator-b67b599dd-gzkjl\" (UID: \"407028a3-3d79-420a-9dcf-751bb138e022\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzkjl" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.991924 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-zsscn" Dec 05 10:29:29 crc kubenswrapper[4796]: I1205 10:29:29.995034 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v4phj"] Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.003495 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.011140 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgb6p\" (UniqueName: \"kubernetes.io/projected/7f15aff4-f714-4cc5-a551-36653ade84e4-kube-api-access-vgb6p\") pod \"machine-config-operator-74547568cd-gw4sz\" (UID: \"7f15aff4-f714-4cc5-a551-36653ade84e4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.026336 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-42nbd"] Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.026656 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t9z7\" (UniqueName: \"kubernetes.io/projected/356c2f0d-a078-42e4-925a-e4f39864eb48-kube-api-access-6t9z7\") pod \"marketplace-operator-79b997595-gmlb4\" (UID: \"356c2f0d-a078-42e4-925a-e4f39864eb48\") " pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.027494 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qvpdg"] Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.045966 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99tcd\" (UniqueName: \"kubernetes.io/projected/77837609-281b-417a-8398-7732463eb92a-kube-api-access-99tcd\") pod 
\"machine-api-operator-5694c8668f-z2zft\" (UID: \"77837609-281b-417a-8398-7732463eb92a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z2zft" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.048643 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:30 crc kubenswrapper[4796]: E1205 10:29:30.048831 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:30.548812962 +0000 UTC m=+116.836918476 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.049158 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:30 crc kubenswrapper[4796]: E1205 10:29:30.050295 4796 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:30.550282374 +0000 UTC m=+116.838387887 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.050800 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m"] Dec 05 10:29:30 crc kubenswrapper[4796]: W1205 10:29:30.051285 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72c37509_1c20_49e9_9a41_49a79f90a2e2.slice/crio-8a15fca5613439735e256fa105249baa26724745ffd98b4f3fd30780e5d8a37f WatchSource:0}: Error finding container 8a15fca5613439735e256fa105249baa26724745ffd98b4f3fd30780e5d8a37f: Status 404 returned error can't find the container with id 8a15fca5613439735e256fa105249baa26724745ffd98b4f3fd30780e5d8a37f Dec 05 10:29:30 crc kubenswrapper[4796]: W1205 10:29:30.054762 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56729699_46b2_454c_83d5_9dce9d90ac49.slice/crio-716acb3d87f0fa423b63c8e082d22df1adea00e3b123dc1ff91eec8963682a25 WatchSource:0}: Error finding container 716acb3d87f0fa423b63c8e082d22df1adea00e3b123dc1ff91eec8963682a25: Status 404 returned error can't find the container with id 716acb3d87f0fa423b63c8e082d22df1adea00e3b123dc1ff91eec8963682a25 Dec 05 10:29:30 crc kubenswrapper[4796]: 
I1205 10:29:30.057260 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.061959 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzkjl" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.067727 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nxq92"] Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.072902 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz4mw\" (UniqueName: \"kubernetes.io/projected/b627bdb1-a428-4272-b9b4-2732a8cb4b2c-kube-api-access-wz4mw\") pod \"ingress-canary-hktfd\" (UID: \"b627bdb1-a428-4272-b9b4-2732a8cb4b2c\") " pod="openshift-ingress-canary/ingress-canary-hktfd" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.086345 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8knpz\" (UniqueName: \"kubernetes.io/projected/766637c3-bd89-4f55-950e-68c553a5c6a4-kube-api-access-8knpz\") pod \"control-plane-machine-set-operator-78cbb6b69f-dktck\" (UID: \"766637c3-bd89-4f55-950e-68c553a5c6a4\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dktck" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.091735 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n42q4"] Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.094198 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.100488 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.106161 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lwbh6"] Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.108129 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn9tx\" (UniqueName: \"kubernetes.io/projected/010954d4-08ed-47a9-a504-8149788b8b65-kube-api-access-bn9tx\") pod \"dns-default-9fpsf\" (UID: \"010954d4-08ed-47a9-a504-8149788b8b65\") " pod="openshift-dns/dns-default-9fpsf" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.114390 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pzjgw"] Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.115879 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.125022 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hp7t\" (UniqueName: \"kubernetes.io/projected/f9ebd86c-0c38-4954-8c59-a4e0168fb2d5-kube-api-access-4hp7t\") pod \"collect-profiles-29415495-lj8xl\" (UID: \"f9ebd86c-0c38-4954-8c59-a4e0168fb2d5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl" Dec 05 10:29:30 crc kubenswrapper[4796]: W1205 10:29:30.129958 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfe92dbc_47df_44cc_bd99_123052631ff4.slice/crio-0bf9a65d82d984e44e3dec1bb888af54702bc2ea40729733c2bf8f59ba1e1089 WatchSource:0}: Error finding container 0bf9a65d82d984e44e3dec1bb888af54702bc2ea40729733c2bf8f59ba1e1089: Status 404 returned error can't find the container with id 0bf9a65d82d984e44e3dec1bb888af54702bc2ea40729733c2bf8f59ba1e1089 Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.132199 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-9fpsf" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.145106 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5k24"] Dec 05 10:29:30 crc kubenswrapper[4796]: W1205 10:29:30.145438 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd8fbf42_d1ee_4ca8_bb56_491c068bbf4a.slice/crio-5b235b4b3243bb5231443a195935b886bf13105d3b5dae8dfb5787a8e9a32980 WatchSource:0}: Error finding container 5b235b4b3243bb5231443a195935b886bf13105d3b5dae8dfb5787a8e9a32980: Status 404 returned error can't find the container with id 5b235b4b3243bb5231443a195935b886bf13105d3b5dae8dfb5787a8e9a32980 Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.145641 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhltr\" (UniqueName: \"kubernetes.io/projected/ed98b6c8-708c-4282-b1fe-4064fd60f446-kube-api-access-jhltr\") pod \"service-ca-9c57cc56f-6ftq2\" (UID: \"ed98b6c8-708c-4282-b1fe-4064fd60f446\") " pod="openshift-service-ca/service-ca-9c57cc56f-6ftq2" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.145954 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6v7lm"] Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.146848 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx"] Dec 05 10:29:30 crc kubenswrapper[4796]: W1205 10:29:30.148788 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dddee8f_f24e_4777_ba27_418ef5ef30c5.slice/crio-e09af963d17abbfa3f737e56cb571a27b373e48735594f4650b5671751d2ccfa WatchSource:0}: Error finding container 
e09af963d17abbfa3f737e56cb571a27b373e48735594f4650b5671751d2ccfa: Status 404 returned error can't find the container with id e09af963d17abbfa3f737e56cb571a27b373e48735594f4650b5671751d2ccfa Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.150621 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:30 crc kubenswrapper[4796]: E1205 10:29:30.150913 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:30.650891169 +0000 UTC m=+116.938996682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.151018 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:30 crc kubenswrapper[4796]: E1205 10:29:30.151305 4796 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:30.651299117 +0000 UTC m=+116.939404629 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.157827 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hktfd" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.161624 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2mpns" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.163210 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtxgs\" (UniqueName: \"kubernetes.io/projected/a5c7cd05-933f-4424-92c8-5a7023694cc1-kube-api-access-mtxgs\") pod \"olm-operator-6b444d44fb-r45ng\" (UID: \"a5c7cd05-933f-4424-92c8-5a7023694cc1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r45ng" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.177883 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.187188 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z87kv\" (UniqueName: \"kubernetes.io/projected/9a043ce0-300f-49b2-9f8f-30497aa70426-kube-api-access-z87kv\") pod \"csi-hostpathplugin-n87bt\" (UID: \"9a043ce0-300f-49b2-9f8f-30497aa70426\") " pod="hostpath-provisioner/csi-hostpathplugin-n87bt" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.210630 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp6rj\" (UniqueName: \"kubernetes.io/projected/ae18ea6b-e8c4-46e8-b9df-49944b44a999-kube-api-access-fp6rj\") pod \"package-server-manager-789f6589d5-blx8z\" (UID: \"ae18ea6b-e8c4-46e8-b9df-49944b44a999\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blx8z" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.230024 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac76ab3e-e33f-4a64-8f67-b65675369db5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wzr5n\" (UID: \"ac76ab3e-e33f-4a64-8f67-b65675369db5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wzr5n" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.242085 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxz98"] Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.250539 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-574tq\" (UniqueName: \"kubernetes.io/projected/a093a74f-f4f8-43ff-9f16-232761ad58e0-kube-api-access-574tq\") pod \"multus-admission-controller-857f4d67dd-w7mdv\" (UID: \"a093a74f-f4f8-43ff-9f16-232761ad58e0\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-w7mdv" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.251998 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:30 crc kubenswrapper[4796]: E1205 10:29:30.252333 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:30.752320196 +0000 UTC m=+117.040425710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.265334 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84wzp\" (UniqueName: \"kubernetes.io/projected/de6d4145-d6ed-46d6-980f-07305bfd7933-kube-api-access-84wzp\") pod \"migrator-59844c95c7-l6k4p\" (UID: \"de6d4145-d6ed-46d6-980f-07305bfd7933\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l6k4p" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.286591 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks"] Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.290306 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg2rc\" (UniqueName: \"kubernetes.io/projected/b57e8bb9-165e-44d6-8ca2-f14462dd7d37-kube-api-access-bg2rc\") pod \"service-ca-operator-777779d784-lxxz8\" (UID: \"b57e8bb9-165e-44d6-8ca2-f14462dd7d37\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxxz8" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.327616 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-z2zft" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.341628 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz"] Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.344437 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wzr5n" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.350911 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l6k4p" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.352916 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:30 crc kubenswrapper[4796]: E1205 10:29:30.353155 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-05 10:29:30.853144168 +0000 UTC m=+117.141249680 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.369329 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dktck" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.374556 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-w7mdv" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.379998 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r45ng" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.386886 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blx8z" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.406975 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6ftq2" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.415671 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2tpfm"] Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.417824 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-t4hjr"] Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.421057 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.427385 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzkjl"] Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.427583 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxxz8" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.443219 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" event={"ID":"20e67d3a-290d-4ac2-b3ef-69330f127f52","Type":"ContainerStarted","Data":"f907a9928c3bb130659d03d300fea88d32163608329000f55fdcb0c9439020e9"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.443881 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" event={"ID":"56729699-46b2-454c-83d5-9dce9d90ac49","Type":"ContainerStarted","Data":"716acb3d87f0fa423b63c8e082d22df1adea00e3b123dc1ff91eec8963682a25"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.450033 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg"] Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.451093 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-n87bt" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.451934 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6v7lm" event={"ID":"5886f6e5-57aa-4e8d-adb5-a0cb9f1648db","Type":"ContainerStarted","Data":"7b9a100a118ac4aedc50bc5325e54483b143a7009d24e53b7f3a742cac27c8ed"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.465149 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5k24" event={"ID":"38ff4075-9f86-4457-af3b-a1db2ea74bf7","Type":"ContainerStarted","Data":"f95c2cd30c42b4d56059bf400ca3f24d010b2ab522bef737c605a42651d3418e"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.465749 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:30 crc kubenswrapper[4796]: E1205 10:29:30.466261 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:30.966244368 +0000 UTC m=+117.254349881 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.478154 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" event={"ID":"3dddee8f-f24e-4777-ba27-418ef5ef30c5","Type":"ContainerStarted","Data":"e09af963d17abbfa3f737e56cb571a27b373e48735594f4650b5671751d2ccfa"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.484980 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-54wv6" event={"ID":"82c1511d-7bf8-4245-9593-f2de287b9dd6","Type":"ContainerStarted","Data":"b9de76a40d9a366f15b221c11aaceb2564005f9983b1f8798cf0a0243b47e985"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.485005 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-54wv6" event={"ID":"82c1511d-7bf8-4245-9593-f2de287b9dd6","Type":"ContainerStarted","Data":"e3be5628a727f9aa96f8e0457c168f3f60e6f060f4fe8ec42d3b90e64e338c2a"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.485015 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-54wv6" event={"ID":"82c1511d-7bf8-4245-9593-f2de287b9dd6","Type":"ContainerStarted","Data":"0c8a25c2942350097d1ba35d18fe239b026e0cca1e7099973d507553e991e6d9"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.488502 4796 generic.go:334] "Generic (PLEG): container finished" podID="72c37509-1c20-49e9-9a41-49a79f90a2e2" 
containerID="9863ca8db540aac7ce70424fea3bfa884116110efd963c7a26b8c13530abe7b7" exitCode=0 Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.488712 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42nbd" event={"ID":"72c37509-1c20-49e9-9a41-49a79f90a2e2","Type":"ContainerDied","Data":"9863ca8db540aac7ce70424fea3bfa884116110efd963c7a26b8c13530abe7b7"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.488737 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42nbd" event={"ID":"72c37509-1c20-49e9-9a41-49a79f90a2e2","Type":"ContainerStarted","Data":"8a15fca5613439735e256fa105249baa26724745ffd98b4f3fd30780e5d8a37f"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.492119 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n42q4" event={"ID":"1aadfdc4-1cb5-432f-ab33-81df64ceb763","Type":"ContainerStarted","Data":"5f82ea874d59e85cb6635ef6b7f002173f8f1a220373b47234eb42be4b4606b5"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.500389 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9fpsf"] Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.505100 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwbh6" event={"ID":"fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a","Type":"ContainerStarted","Data":"5b235b4b3243bb5231443a195935b886bf13105d3b5dae8dfb5787a8e9a32980"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.507976 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6" event={"ID":"f7b44e76-07c3-4876-965a-d7fe4c3c8e41","Type":"ContainerStarted","Data":"46359d3498e1a55a4f5c2c674b50b275c7a493cd5f9efebe0d05efbe78fa84e8"} Dec 05 
10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.508000 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6" event={"ID":"f7b44e76-07c3-4876-965a-d7fe4c3c8e41","Type":"ContainerStarted","Data":"78f5aa21add6359512b2ef03b1edf5d88922354b771e0ef8464943d283f5dc29"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.509558 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zsscn" event={"ID":"8c481089-1d82-466f-b015-541a729f07b7","Type":"ContainerStarted","Data":"419262cdd5fd4044085ab463c0dc1498be580e503c36e8c2b5f221bdcb4bfd7a"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.520719 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v4phj" event={"ID":"93a93c24-45ae-43e1-9349-7471c6a218f8","Type":"ContainerStarted","Data":"8cc573bd576ba6cd78bf76d400c725e232386cdea0b8ed208c821ff943660b2e"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.530302 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx" event={"ID":"ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da","Type":"ContainerStarted","Data":"f7e466e80ac920afd08dc3cbc9d207d3b2252efadb51157903885dfff71bc6b8"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.532412 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gmlb4"] Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.538993 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nxq92" event={"ID":"dfe92dbc-47df-44cc-bd99-123052631ff4","Type":"ContainerStarted","Data":"0bf9a65d82d984e44e3dec1bb888af54702bc2ea40729733c2bf8f59ba1e1089"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.543556 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-8hb86" event={"ID":"1b50e1f6-6ac8-4672-a14e-6fe4a6d3b4be","Type":"ContainerStarted","Data":"fef862abe166d3f46a9e755375c69946f38106f27703dcf9953b9e6576fefa7b"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.543635 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8hb86" event={"ID":"1b50e1f6-6ac8-4672-a14e-6fe4a6d3b4be","Type":"ContainerStarted","Data":"c640ac90d1c546f0d12291c8c6ea23346a5f4aac1e34fccdc22f3f9a7b574078"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.547016 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks" event={"ID":"4b8469b6-c8dd-4b39-81cb-ab35657cdfb5","Type":"ContainerStarted","Data":"1fd3462e2fe9ae2ea921b8551c5b199657a467836d635f37f59a3271637b1d99"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.555410 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxz98" event={"ID":"c282a9e9-3ef7-41fa-8df8-89ae6aa7f592","Type":"ContainerStarted","Data":"663c456e9a682ed01d5b49fd18a32dc58af382438704cddc30153ead56f55fec"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.561747 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz" event={"ID":"7f15aff4-f714-4cc5-a551-36653ade84e4","Type":"ContainerStarted","Data":"421765e67c94745a91e5d079bc8e3ece87423a3aa4f69b3d4e19de80fa546ca8"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.574288 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:30 crc kubenswrapper[4796]: E1205 10:29:30.574898 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:31.074884317 +0000 UTC m=+117.362989831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.599621 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" event={"ID":"a0677773-5515-4b74-9975-75dc72e6a127","Type":"ContainerStarted","Data":"37c1890208b4f8639a3faf14af8bf9444bfdb383a0ff934b221a03db9f5e4cb4"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.599645 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" event={"ID":"a0677773-5515-4b74-9975-75dc72e6a127","Type":"ContainerStarted","Data":"df40992b222d9822a1a4e4ba6f49890a8f6990c9c859cdeeddfa7bce2f1b0908"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.599662 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.603082 4796 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-j7krr container/controller-manager namespace/openshift-controller-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.603136 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" podUID="a0677773-5515-4b74-9975-75dc72e6a127" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.633233 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jfp56" event={"ID":"dc21f1de-fae3-4d9a-862e-e33257b4f177","Type":"ContainerStarted","Data":"de7979daa259eabf5d895ae5888e81989398da4f17c34cf0a033ce6fc54c25e3"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.633273 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-jfp56" event={"ID":"dc21f1de-fae3-4d9a-862e-e33257b4f177","Type":"ContainerStarted","Data":"183d137c8def7ddac6d8638991d2548f21e5b92030492e87842b75e3c3423cf3"} Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.633572 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-jfp56" Dec 05 10:29:30 crc kubenswrapper[4796]: W1205 10:29:30.634310 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod356c2f0d_a078_42e4_925a_e4f39864eb48.slice/crio-e4b1f48ef73739019deec84a57d71420d34209f91484a8e06dbc585c29aead36 WatchSource:0}: Error finding container e4b1f48ef73739019deec84a57d71420d34209f91484a8e06dbc585c29aead36: Status 404 returned error can't find the container with id e4b1f48ef73739019deec84a57d71420d34209f91484a8e06dbc585c29aead36 Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.636270 4796 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-jfp56 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.636302 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jfp56" podUID="dc21f1de-fae3-4d9a-862e-e33257b4f177" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.674452 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4"] Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.675261 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:30 crc kubenswrapper[4796]: E1205 10:29:30.676990 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:31.176971872 +0000 UTC m=+117.465077386 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.723217 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hktfd"] Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.737432 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh"] Dec 05 10:29:30 crc kubenswrapper[4796]: W1205 10:29:30.770550 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb627bdb1_a428_4272_b9b4_2732a8cb4b2c.slice/crio-e28e286b8ed98f97c88e3804246ff02d24cd2324d5c9dd22005882d1b075e363 WatchSource:0}: Error finding container e28e286b8ed98f97c88e3804246ff02d24cd2324d5c9dd22005882d1b075e363: Status 404 returned error can't find the container with id e28e286b8ed98f97c88e3804246ff02d24cd2324d5c9dd22005882d1b075e363 Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.777115 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:30 crc kubenswrapper[4796]: E1205 10:29:30.777436 4796 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:31.277426007 +0000 UTC m=+117.565531520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:30 crc kubenswrapper[4796]: W1205 10:29:30.783885 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc970b9fd_5a2c_43db_a77c_078bcfbe3f6f.slice/crio-8b600d682a70ab8ca556ac1f906874207615743d4baae823a3eae0080ad70c7b WatchSource:0}: Error finding container 8b600d682a70ab8ca556ac1f906874207615743d4baae823a3eae0080ad70c7b: Status 404 returned error can't find the container with id 8b600d682a70ab8ca556ac1f906874207615743d4baae823a3eae0080ad70c7b Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.882328 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:30 crc kubenswrapper[4796]: E1205 10:29:30.882925 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-05 10:29:31.382912028 +0000 UTC m=+117.671017542 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:30 crc kubenswrapper[4796]: I1205 10:29:30.988545 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:30 crc kubenswrapper[4796]: E1205 10:29:30.989161 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:31.48914751 +0000 UTC m=+117.777253023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.089866 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:31 crc kubenswrapper[4796]: E1205 10:29:31.090026 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:31.58999314 +0000 UTC m=+117.878098653 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.090099 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:31 crc kubenswrapper[4796]: E1205 10:29:31.090357 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:31.59034956 +0000 UTC m=+117.878455073 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.164701 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-l6k4p"] Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.191216 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:31 crc kubenswrapper[4796]: E1205 10:29:31.191647 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:31.691632553 +0000 UTC m=+117.979738066 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.216903 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6ftq2"] Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.292983 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:31 crc kubenswrapper[4796]: E1205 10:29:31.293218 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:31.793208566 +0000 UTC m=+118.081314079 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.379764 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" podStartSLOduration=96.379748422 podStartE2EDuration="1m36.379748422s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:31.340272896 +0000 UTC m=+117.628378408" watchObservedRunningTime="2025-12-05 10:29:31.379748422 +0000 UTC m=+117.667853935" Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.398722 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:31 crc kubenswrapper[4796]: E1205 10:29:31.399370 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:31.899351222 +0000 UTC m=+118.187456735 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.418185 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rmk7f" podStartSLOduration=96.418167183 podStartE2EDuration="1m36.418167183s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:31.413125689 +0000 UTC m=+117.701231202" watchObservedRunningTime="2025-12-05 10:29:31.418167183 +0000 UTC m=+117.706272696" Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.462804 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z2zft"] Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.463711 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl"] Dec 05 10:29:31 crc kubenswrapper[4796]: W1205 10:29:31.489869 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9ebd86c_0c38_4954_8c59_a4e0168fb2d5.slice/crio-0ed2a86acb7cfd7d33835bbceddc5c1f3460a93c0a8ed306df86ebf7d73e8b02 WatchSource:0}: Error finding container 0ed2a86acb7cfd7d33835bbceddc5c1f3460a93c0a8ed306df86ebf7d73e8b02: Status 404 returned error can't find the container with id 0ed2a86acb7cfd7d33835bbceddc5c1f3460a93c0a8ed306df86ebf7d73e8b02 Dec 05 10:29:31 crc 
kubenswrapper[4796]: I1205 10:29:31.500844 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:31 crc kubenswrapper[4796]: E1205 10:29:31.501142 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:32.001130537 +0000 UTC m=+118.289236050 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.575738 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wzr5n"] Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.576491 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r45ng"] Dec 05 10:29:31 crc kubenswrapper[4796]: W1205 10:29:31.595056 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77837609_281b_417a_8398_7732463eb92a.slice/crio-d666dcde41381f90426890e75d3c72ac598849718fb8735c05a29f264f883198 WatchSource:0}: Error finding container 
d666dcde41381f90426890e75d3c72ac598849718fb8735c05a29f264f883198: Status 404 returned error can't find the container with id d666dcde41381f90426890e75d3c72ac598849718fb8735c05a29f264f883198 Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.603393 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:31 crc kubenswrapper[4796]: E1205 10:29:31.603724 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:32.103711199 +0000 UTC m=+118.391816713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.663626 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hktfd" event={"ID":"b627bdb1-a428-4272-b9b4-2732a8cb4b2c","Type":"ContainerStarted","Data":"e28e286b8ed98f97c88e3804246ff02d24cd2324d5c9dd22005882d1b075e363"} Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.673028 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6v7lm" 
event={"ID":"5886f6e5-57aa-4e8d-adb5-a0cb9f1648db","Type":"ContainerStarted","Data":"79c20cf2ae2a7104f23e1a0bf2aceef1d9d736c7ff1ad8a4c36a59dc36bdbb5a"} Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.673848 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6v7lm" Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.676703 4796 patch_prober.go:28] interesting pod/console-operator-58897d9998-6v7lm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.676737 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6v7lm" podUID="5886f6e5-57aa-4e8d-adb5-a0cb9f1648db" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.704662 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:31 crc kubenswrapper[4796]: E1205 10:29:31.705909 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:32.205896187 +0000 UTC m=+118.494001700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.716254 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l6k4p" event={"ID":"de6d4145-d6ed-46d6-980f-07305bfd7933","Type":"ContainerStarted","Data":"4663f982bb5f21ab2a6ed6c55c2d02e0b815eebc1962f4e177553904b0c438af"} Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.761262 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6ftq2" event={"ID":"ed98b6c8-708c-4282-b1fe-4064fd60f446","Type":"ContainerStarted","Data":"7900531e84141ade81104f53d10fa8b533bdbab586b062e55a9b71d649157edc"} Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.808364 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-jfp56" podStartSLOduration=96.808346173 podStartE2EDuration="1m36.808346173s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:31.807384405 +0000 UTC m=+118.095489918" watchObservedRunningTime="2025-12-05 10:29:31.808346173 +0000 UTC m=+118.096451686" Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.809127 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks" 
event={"ID":"4b8469b6-c8dd-4b39-81cb-ab35657cdfb5","Type":"ContainerStarted","Data":"d3450e4a7096c39f3eab91ecf460affdd2f943880a27777638b208824292086e"} Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.811383 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:31 crc kubenswrapper[4796]: E1205 10:29:31.811627 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:32.311607753 +0000 UTC m=+118.599713266 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.816035 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwbh6" event={"ID":"fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a","Type":"ContainerStarted","Data":"964c2edf15efe94e6a3bbedaa73b616203c84f80503aff77aa1e389fa206175d"} Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.836920 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" 
event={"ID":"3dddee8f-f24e-4777-ba27-418ef5ef30c5","Type":"ContainerStarted","Data":"ae76fa30ac19537123ca5caa51058cf40f886e8ad8aae63dc3e71ae4fdb75fc2"} Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.839705 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blx8z"] Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.859917 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zsscn" event={"ID":"8c481089-1d82-466f-b015-541a729f07b7","Type":"ContainerStarted","Data":"60408cd09d1311544a443fc456be2b8e5ce58585d8771181ebd11c6d01a09288"} Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.870145 4796 generic.go:334] "Generic (PLEG): container finished" podID="93a93c24-45ae-43e1-9349-7471c6a218f8" containerID="823afbbd8d9e8c1ef70f12338f70c20d89b9f4efd0870dca9f9db1cdbb38764d" exitCode=0 Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.870337 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v4phj" event={"ID":"93a93c24-45ae-43e1-9349-7471c6a218f8","Type":"ContainerDied","Data":"823afbbd8d9e8c1ef70f12338f70c20d89b9f4efd0870dca9f9db1cdbb38764d"} Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.874705 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z2zft" event={"ID":"77837609-281b-417a-8398-7732463eb92a","Type":"ContainerStarted","Data":"d666dcde41381f90426890e75d3c72ac598849718fb8735c05a29f264f883198"} Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.877655 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dktck"] Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.895878 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nxq92" event={"ID":"dfe92dbc-47df-44cc-bd99-123052631ff4","Type":"ContainerStarted","Data":"0c971cc2db1d04ba96307e506ff8eccf9b02acf3c20443202bd2cef3a9e112cf"} Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.908278 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" event={"ID":"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f","Type":"ContainerStarted","Data":"8b600d682a70ab8ca556ac1f906874207615743d4baae823a3eae0080ad70c7b"} Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.910118 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-w7mdv"] Dec 05 10:29:31 crc kubenswrapper[4796]: W1205 10:29:31.913922 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae18ea6b_e8c4_46e8_b9df_49944b44a999.slice/crio-6c8e1b9755190ef43d8307d6d6977d47c9ce29d9c54587aa6cf2a77ad2545420 WatchSource:0}: Error finding container 6c8e1b9755190ef43d8307d6d6977d47c9ce29d9c54587aa6cf2a77ad2545420: Status 404 returned error can't find the container with id 6c8e1b9755190ef43d8307d6d6977d47c9ce29d9c54587aa6cf2a77ad2545420 Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.915735 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-n87bt"] Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.917821 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wzr5n" event={"ID":"ac76ab3e-e33f-4a64-8f67-b65675369db5","Type":"ContainerStarted","Data":"69acba8a7b30e502b6b1fd85b0dae4266d1318ef9bbb1ce0b5058480713bd7c1"} Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.919456 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:31 crc kubenswrapper[4796]: E1205 10:29:31.921390 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:32.421377595 +0000 UTC m=+118.709483108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.936054 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-54wv6" podStartSLOduration=96.93603989 podStartE2EDuration="1m36.93603989s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:31.93448605 +0000 UTC m=+118.222591562" watchObservedRunningTime="2025-12-05 10:29:31.93603989 +0000 UTC m=+118.224145403" Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.936319 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5k24" 
event={"ID":"38ff4075-9f86-4457-af3b-a1db2ea74bf7","Type":"ContainerStarted","Data":"1c79f1cc9ca350e3cd9c9901b4b4062d2baa89a66d795a09c6f831a0180ddfcf"} Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.949507 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2mpns" event={"ID":"db1f92cf-f3a0-40a5-91d8-8fbe70b15948","Type":"ContainerStarted","Data":"a15e4f403b72a3481bc3ba84f796b22f29d6d683925bae0d82de74c63493e734"} Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.949539 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2mpns" event={"ID":"db1f92cf-f3a0-40a5-91d8-8fbe70b15948","Type":"ContainerStarted","Data":"c5665ea696484ccc24b0ccc3cead1c47bfba5707301674ab69a54de762220baa"} Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.959930 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lxxz8"] Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.962867 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" event={"ID":"356c2f0d-a078-42e4-925a-e4f39864eb48","Type":"ContainerStarted","Data":"e4b1f48ef73739019deec84a57d71420d34209f91484a8e06dbc585c29aead36"} Dec 05 10:29:31 crc kubenswrapper[4796]: I1205 10:29:31.993604 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-zsscn" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.003047 4796 patch_prober.go:28] interesting pod/router-default-5444994796-zsscn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 10:29:32 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 05 10:29:32 crc kubenswrapper[4796]: 
[+]process-running ok Dec 05 10:29:32 crc kubenswrapper[4796]: healthz check failed Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.003225 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zsscn" podUID="8c481089-1d82-466f-b015-541a729f07b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.012088 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-chjp6" podStartSLOduration=97.012075124 podStartE2EDuration="1m37.012075124s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:32.006928152 +0000 UTC m=+118.295033665" watchObservedRunningTime="2025-12-05 10:29:32.012075124 +0000 UTC m=+118.300180637" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.022093 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:32 crc kubenswrapper[4796]: E1205 10:29:32.023088 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:32.523074235 +0000 UTC m=+118.811179748 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.058943 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx" event={"ID":"ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da","Type":"ContainerStarted","Data":"27a58365f15e5aca24d2add3662943274153d602ca5227016358ebb83a508463"} Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.095871 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n42q4" event={"ID":"1aadfdc4-1cb5-432f-ab33-81df64ceb763","Type":"ContainerStarted","Data":"35f8bd97f20636f7ede261954323d0848db30d1db8b6f38358ff3dd3d7f10405"} Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.125169 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:32 crc kubenswrapper[4796]: E1205 10:29:32.126140 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:32.626129369 +0000 UTC m=+118.914234881 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.140949 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-pzjgw" podStartSLOduration=97.140932028 podStartE2EDuration="1m37.140932028s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:32.140425986 +0000 UTC m=+118.428531500" watchObservedRunningTime="2025-12-05 10:29:32.140932028 +0000 UTC m=+118.429037541" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.146851 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl" event={"ID":"f9ebd86c-0c38-4954-8c59-a4e0168fb2d5","Type":"ContainerStarted","Data":"0ed2a86acb7cfd7d33835bbceddc5c1f3460a93c0a8ed306df86ebf7d73e8b02"} Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.161833 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4" event={"ID":"1a842f36-0393-4416-b05b-9caa7959e8d8","Type":"ContainerStarted","Data":"fdd7a286d4036641d349659e448053db27ea05e96689890ee87d47a1d9ff5e0d"} Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.162615 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.173612 
4796 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-ngdm4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.173654 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4" podUID="1a842f36-0393-4416-b05b-9caa7959e8d8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.199095 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-zsscn" podStartSLOduration=97.199070588 podStartE2EDuration="1m37.199070588s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:32.189304184 +0000 UTC m=+118.477409727" watchObservedRunningTime="2025-12-05 10:29:32.199070588 +0000 UTC m=+118.487176101" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.206541 4796 generic.go:334] "Generic (PLEG): container finished" podID="20e67d3a-290d-4ac2-b3ef-69330f127f52" containerID="ef4993e70b83cf9592cd5e39689e33ab465371da0b7c5c7f2bc028d883ae2b7f" exitCode=0 Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.206808 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" event={"ID":"20e67d3a-290d-4ac2-b3ef-69330f127f52","Type":"ContainerDied","Data":"ef4993e70b83cf9592cd5e39689e33ab465371da0b7c5c7f2bc028d883ae2b7f"} Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.222495 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" event={"ID":"56729699-46b2-454c-83d5-9dce9d90ac49","Type":"ContainerStarted","Data":"b7e7480081aeba60d1481e5174a463e334b0201e49f5d6061b9caae14514da6e"} Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.223125 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.225975 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:32 crc kubenswrapper[4796]: E1205 10:29:32.227107 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:32.727090677 +0000 UTC m=+119.015196190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.262329 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.272502 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxz98" event={"ID":"c282a9e9-3ef7-41fa-8df8-89ae6aa7f592","Type":"ContainerStarted","Data":"151e2651f5c6b31bc59e02a1ec393dc31247145c123544c870f9f7f7e454b05d"} Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.311149 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n42q4" podStartSLOduration=97.311124734 podStartE2EDuration="1m37.311124734s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:32.255798037 +0000 UTC m=+118.543903550" watchObservedRunningTime="2025-12-05 10:29:32.311124734 +0000 UTC m=+118.599230247" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.312816 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2tpfm" event={"ID":"e579b64c-823d-4798-81c7-52c515a4f9f1","Type":"ContainerStarted","Data":"be6939f39ab8b1a6019fbfdd832c3d23bb8b28216b8d830f42b472f957cdb805"} Dec 05 10:29:32 crc 
kubenswrapper[4796]: I1205 10:29:32.313299 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx" podStartSLOduration=97.31328876 podStartE2EDuration="1m37.31328876s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:32.311707829 +0000 UTC m=+118.599813342" watchObservedRunningTime="2025-12-05 10:29:32.31328876 +0000 UTC m=+118.601394274" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.329973 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:32 crc kubenswrapper[4796]: E1205 10:29:32.332128 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:32.832113919 +0000 UTC m=+119.120219432 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.342886 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t4hjr" event={"ID":"8e207003-e5a7-4cd4-a6e4-748fd8ece44b","Type":"ContainerStarted","Data":"b9acad5ce5630fcb8b41f0119683ba36d9403f3efee1cd80b643f5b4ed412598"} Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.342931 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t4hjr" event={"ID":"8e207003-e5a7-4cd4-a6e4-748fd8ece44b","Type":"ContainerStarted","Data":"2730354f504be8b4296aad98febd53bf2eb67507de0ddd2ec405ab0bca4d92c9"} Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.371742 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9fpsf" event={"ID":"010954d4-08ed-47a9-a504-8149788b8b65","Type":"ContainerStarted","Data":"c06b6055486923d6259b9ff5714160ebc4c43ee6e39bd3835e1a2d49488f4cef"} Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.433525 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:32 crc kubenswrapper[4796]: E1205 10:29:32.434603 4796 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:32.934571951 +0000 UTC m=+119.222677463 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.442701 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg" event={"ID":"3fec5c26-b7fd-4f6f-aa94-503312e81cdb","Type":"ContainerStarted","Data":"36c30a11de7f364775fb7781c1250fa0c39f075ac80dbb3a3fd980147d5f657b"} Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.444310 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-2f4ks" podStartSLOduration=97.44429302 podStartE2EDuration="1m37.44429302s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:32.414033531 +0000 UTC m=+118.702139045" watchObservedRunningTime="2025-12-05 10:29:32.44429302 +0000 UTC m=+118.732398532" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.445606 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.446006 4796 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-6v7lm" podStartSLOduration=97.445995979 podStartE2EDuration="1m37.445995979s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:32.445155761 +0000 UTC m=+118.733261273" watchObservedRunningTime="2025-12-05 10:29:32.445995979 +0000 UTC m=+118.734101492" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.451230 4796 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5b8dg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.451303 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg" podUID="3fec5c26-b7fd-4f6f-aa94-503312e81cdb" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.465896 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzkjl" event={"ID":"407028a3-3d79-420a-9dcf-751bb138e022","Type":"ContainerStarted","Data":"7544c9d95f156291dbf069edd6861c539fdd5ca0ee50894268c0223e169ccb22"} Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.523125 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8hb86" event={"ID":"1b50e1f6-6ac8-4672-a14e-6fe4a6d3b4be","Type":"ContainerStarted","Data":"0318d04badf11f2ba99d622e39d73a64469b2ea833d6eb2654d3722f32ff35a9"} Dec 05 10:29:32 crc 
kubenswrapper[4796]: I1205 10:29:32.523363 4796 patch_prober.go:28] interesting pod/downloads-7954f5f757-jfp56 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.523484 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-jfp56" podUID="dc21f1de-fae3-4d9a-862e-e33257b4f177" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.536756 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.537676 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:29:32 crc kubenswrapper[4796]: E1205 10:29:32.538080 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:33.038064264 +0000 UTC m=+119.326169778 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.557387 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nxq92" podStartSLOduration=97.557371059 podStartE2EDuration="1m37.557371059s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:32.489866726 +0000 UTC m=+118.777972239" watchObservedRunningTime="2025-12-05 10:29:32.557371059 +0000 UTC m=+118.845476572" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.558269 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwbh6" podStartSLOduration=97.558264087 podStartE2EDuration="1m37.558264087s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:32.552110783 +0000 UTC m=+118.840216306" watchObservedRunningTime="2025-12-05 10:29:32.558264087 +0000 UTC m=+118.846369599" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.610394 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2mpns" podStartSLOduration=5.6103731759999995 podStartE2EDuration="5.610373176s" 
podCreationTimestamp="2025-12-05 10:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:32.577418844 +0000 UTC m=+118.865524357" watchObservedRunningTime="2025-12-05 10:29:32.610373176 +0000 UTC m=+118.898478689" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.638238 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:32 crc kubenswrapper[4796]: E1205 10:29:32.639223 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:33.139211462 +0000 UTC m=+119.427316976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.650824 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l5k24" podStartSLOduration=97.650808116 podStartE2EDuration="1m37.650808116s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:32.612917538 +0000 UTC m=+118.901023051" watchObservedRunningTime="2025-12-05 10:29:32.650808116 +0000 UTC m=+118.938913629" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.688469 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4" podStartSLOduration=97.688443986 podStartE2EDuration="1m37.688443986s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:32.68774392 +0000 UTC m=+118.975849433" watchObservedRunningTime="2025-12-05 10:29:32.688443986 +0000 UTC m=+118.976549499" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.728068 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2tpfm" podStartSLOduration=97.728052122 podStartE2EDuration="1m37.728052122s" podCreationTimestamp="2025-12-05 
10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:32.726442888 +0000 UTC m=+119.014548401" watchObservedRunningTime="2025-12-05 10:29:32.728052122 +0000 UTC m=+119.016157635" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.739865 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:32 crc kubenswrapper[4796]: E1205 10:29:32.740351 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:33.240341027 +0000 UTC m=+119.528446540 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.755132 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-t4hjr" podStartSLOduration=97.755120332 podStartE2EDuration="1m37.755120332s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:32.754922299 +0000 UTC m=+119.043027822" watchObservedRunningTime="2025-12-05 10:29:32.755120332 +0000 UTC m=+119.043225845" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.792055 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg" podStartSLOduration=97.792041488 podStartE2EDuration="1m37.792041488s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:32.790015531 +0000 UTC m=+119.078121054" watchObservedRunningTime="2025-12-05 10:29:32.792041488 +0000 UTC m=+119.080147002" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.841645 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:32 crc kubenswrapper[4796]: E1205 10:29:32.842126 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:33.342106567 +0000 UTC m=+119.630212080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.902308 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-8hb86" podStartSLOduration=97.902287765 podStartE2EDuration="1m37.902287765s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:32.835523014 +0000 UTC m=+119.123628517" watchObservedRunningTime="2025-12-05 10:29:32.902287765 +0000 UTC m=+119.190393279" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.902615 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzkjl" podStartSLOduration=97.902611275 podStartE2EDuration="1m37.902611275s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 
10:29:32.901875492 +0000 UTC m=+119.189980994" watchObservedRunningTime="2025-12-05 10:29:32.902611275 +0000 UTC m=+119.190716788" Dec 05 10:29:32 crc kubenswrapper[4796]: I1205 10:29:32.945859 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:32 crc kubenswrapper[4796]: E1205 10:29:32.946448 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:33.446423591 +0000 UTC m=+119.734529105 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.020719 4796 patch_prober.go:28] interesting pod/router-default-5444994796-zsscn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 10:29:33 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 05 10:29:33 crc kubenswrapper[4796]: [+]process-running ok Dec 05 10:29:33 crc kubenswrapper[4796]: healthz check failed Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.020778 4796 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zsscn" podUID="8c481089-1d82-466f-b015-541a729f07b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.047326 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:33 crc kubenswrapper[4796]: E1205 10:29:33.047726 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:33.547713747 +0000 UTC m=+119.835819261 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.099503 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" podStartSLOduration=98.099488449 podStartE2EDuration="1m38.099488449s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:33.097881549 +0000 UTC m=+119.385987062" watchObservedRunningTime="2025-12-05 10:29:33.099488449 +0000 UTC m=+119.387593962" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.136036 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xxz98" podStartSLOduration=98.136021135 podStartE2EDuration="1m38.136021135s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:33.134926739 +0000 UTC m=+119.423032252" watchObservedRunningTime="2025-12-05 10:29:33.136021135 +0000 UTC m=+119.424126649" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.153609 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: 
\"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:33 crc kubenswrapper[4796]: E1205 10:29:33.153891 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:33.653878445 +0000 UTC m=+119.941983958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.255027 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:33 crc kubenswrapper[4796]: E1205 10:29:33.255291 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:33.755277646 +0000 UTC m=+120.043383160 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.255327 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:33 crc kubenswrapper[4796]: E1205 10:29:33.255592 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:33.755585925 +0000 UTC m=+120.043691438 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.356348 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:33 crc kubenswrapper[4796]: E1205 10:29:33.356521 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:33.85649712 +0000 UTC m=+120.144602633 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.356666 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:33 crc kubenswrapper[4796]: E1205 10:29:33.356930 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:33.856920074 +0000 UTC m=+120.145025587 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.457323 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:33 crc kubenswrapper[4796]: E1205 10:29:33.457972 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:33.957958287 +0000 UTC m=+120.246063800 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.529248 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-gzkjl" event={"ID":"407028a3-3d79-420a-9dcf-751bb138e022","Type":"ContainerStarted","Data":"7800b478d5283af81cd6a14f9467daf3ef789a7a847ec40483c9b71a653f8e07"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.532207 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4" event={"ID":"1a842f36-0393-4416-b05b-9caa7959e8d8","Type":"ContainerStarted","Data":"09f3b9255033d40f79b642ee461e53a73d2ac435fbe0dbd57684ce14fa7af1ad"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.534325 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz" event={"ID":"7f15aff4-f714-4cc5-a551-36653ade84e4","Type":"ContainerStarted","Data":"458e0567dd0e1dc3f220fc992597302874b21a29ea6643bc0a07cc2cdfa896d0"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.534349 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz" event={"ID":"7f15aff4-f714-4cc5-a551-36653ade84e4","Type":"ContainerStarted","Data":"58f9f5cbdb670db371d464632afd8e5ab7a700136974d538bfc44df182f62d4d"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.536053 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r45ng" event={"ID":"a5c7cd05-933f-4424-92c8-5a7023694cc1","Type":"ContainerStarted","Data":"f6ccfc7a6e98e050ceab99e0437eaf91012f085562d4f2adac06f5d6dfe3b72a"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.536076 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r45ng" event={"ID":"a5c7cd05-933f-4424-92c8-5a7023694cc1","Type":"ContainerStarted","Data":"207bf0ff0cddb43b4bbbd13e99784fae3a5c2a79c9d38db5b25820e0169d17ed"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.536551 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r45ng" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.548257 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" event={"ID":"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f","Type":"ContainerStarted","Data":"e029260f600ee56f8c15414d018797bee09922ec75b8c4db5db6c47feed37497"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.548861 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.552221 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-w7mdv" event={"ID":"a093a74f-f4f8-43ff-9f16-232761ad58e0","Type":"ContainerStarted","Data":"032004eda461fa42a6ab317d851becba54907b2f019031048f86cfe034479a90"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.552249 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-w7mdv" 
event={"ID":"a093a74f-f4f8-43ff-9f16-232761ad58e0","Type":"ContainerStarted","Data":"cb852d3b9e6f4eae3306f38dcc261d67ab3d1c0fb767a04d62117908414036cb"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.559736 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:33 crc kubenswrapper[4796]: E1205 10:29:33.560138 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:34.06012386 +0000 UTC m=+120.348229372 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.560640 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl" event={"ID":"f9ebd86c-0c38-4954-8c59-a4e0168fb2d5","Type":"ContainerStarted","Data":"75be366635200236d7249af0c8be43ab5f0ec4777402f2f461c2e94a8a1b0278"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.564593 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" Dec 05 
10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.566877 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z2zft" event={"ID":"77837609-281b-417a-8398-7732463eb92a","Type":"ContainerStarted","Data":"b60ee58f2a267d7381a1035fc67922072cf13c2c337909404ad6d40583c3e66b"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.566912 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z2zft" event={"ID":"77837609-281b-417a-8398-7732463eb92a","Type":"ContainerStarted","Data":"5dd15c943f36400c048b7be3486b7b21f66a9e13ae836a31a1b7b36a60967c06"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.567494 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r45ng" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.569546 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wzr5n" event={"ID":"ac76ab3e-e33f-4a64-8f67-b65675369db5","Type":"ContainerStarted","Data":"dc7c7300175167fb185a72a41f7f2fec523bf722a5291394ab5ddca7c30c50ea"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.576198 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42nbd" event={"ID":"72c37509-1c20-49e9-9a41-49a79f90a2e2","Type":"ContainerStarted","Data":"fe8e9a6356fecd2026eafd17d66be91282d8363df376bf05824f3e8fb7e51519"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.576576 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42nbd" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.581021 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-jn5vx" 
event={"ID":"ed5a82b1-8f18-4b9a-9902-ae4c2c8a44da","Type":"ContainerStarted","Data":"09fe0d1306bbc14646326a0828a119bfca2e2ca48c6742b5bc98008d7f58ba75"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.584044 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l6k4p" event={"ID":"de6d4145-d6ed-46d6-980f-07305bfd7933","Type":"ContainerStarted","Data":"4f7493ef18bec68f7ef2c6eef04ebe94ce3e5fe0af970dab3faec526cbc28493"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.584079 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l6k4p" event={"ID":"de6d4145-d6ed-46d6-980f-07305bfd7933","Type":"ContainerStarted","Data":"a4bc662dce1c8b24cfa35216001e941cdb1f83cf416e1e9086fc0b0b68cf4f1b"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.587703 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v4phj" event={"ID":"93a93c24-45ae-43e1-9349-7471c6a218f8","Type":"ContainerStarted","Data":"8b78c815852177612e1f6ec991f7ad50b66522d6e093a455cf84e7594c1eda27"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.587727 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v4phj" event={"ID":"93a93c24-45ae-43e1-9349-7471c6a218f8","Type":"ContainerStarted","Data":"e6ef686654f1c1c3394a9c5f0ebb1456a8572192fc0f0e5af74c8a443dd893b9"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.591932 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blx8z" event={"ID":"ae18ea6b-e8c4-46e8-b9df-49944b44a999","Type":"ContainerStarted","Data":"bf3aa6986f7676b4b41810fe42226393e70f9b94db0eeb30adb4b31c8e165102"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.591957 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blx8z" event={"ID":"ae18ea6b-e8c4-46e8-b9df-49944b44a999","Type":"ContainerStarted","Data":"d62358c3c97304964dfd09e9e10f82d5c2d36712636f3aef2bb3c80da6c7bef9"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.591967 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blx8z" event={"ID":"ae18ea6b-e8c4-46e8-b9df-49944b44a999","Type":"ContainerStarted","Data":"6c8e1b9755190ef43d8307d6d6977d47c9ce29d9c54587aa6cf2a77ad2545420"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.592286 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blx8z" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.593873 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hktfd" event={"ID":"b627bdb1-a428-4272-b9b4-2732a8cb4b2c","Type":"ContainerStarted","Data":"32c67a9a84deaa99c219904d350b157b360078796efdd12455a9b078f57ffacc"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.597067 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2tpfm" event={"ID":"e579b64c-823d-4798-81c7-52c515a4f9f1","Type":"ContainerStarted","Data":"a673ade4db41bb467e8aa78967ab3bfad871b984890fd53915a399a138f9f2bc"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.597092 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-2tpfm" event={"ID":"e579b64c-823d-4798-81c7-52c515a4f9f1","Type":"ContainerStarted","Data":"d7e1e7a2d239092c8a2b3a8465993728120e5cc7e3c25470e1264964d99df44c"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.599042 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9fpsf" 
event={"ID":"010954d4-08ed-47a9-a504-8149788b8b65","Type":"ContainerStarted","Data":"bd286ea7ac6157e2422f0271d723cd8686c2fe4cd5c847d23fa5a8ff1f908931"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.599066 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9fpsf" event={"ID":"010954d4-08ed-47a9-a504-8149788b8b65","Type":"ContainerStarted","Data":"df60f0a2a8e7cf562e94e0ce360805c83420b0829a20696964fa9c73107daaa6"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.599420 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-9fpsf" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.601640 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" event={"ID":"20e67d3a-290d-4ac2-b3ef-69330f127f52","Type":"ContainerStarted","Data":"5dc0b793bcc3f2c733c2400e2603eaa0c31263006ffbf5415cd10d36cf721065"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.606953 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg" event={"ID":"3fec5c26-b7fd-4f6f-aa94-503312e81cdb","Type":"ContainerStarted","Data":"d44b8b4fb2d0293dd53a581450397938b1d37d5f656b7cb46ac9b348aa2a9d53"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.608455 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxxz8" event={"ID":"b57e8bb9-165e-44d6-8ca2-f14462dd7d37","Type":"ContainerStarted","Data":"11bb6144313750404cd34928362c4ce5442d46af6770c9a5243dcb431239408c"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.608491 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxxz8" event={"ID":"b57e8bb9-165e-44d6-8ca2-f14462dd7d37","Type":"ContainerStarted","Data":"54b8e9422f16c4f1e37dcf1ee0e56e5dc9e4dff9062940093383acb4c08b90d8"} Dec 05 
10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.609718 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n87bt" event={"ID":"9a043ce0-300f-49b2-9f8f-30497aa70426","Type":"ContainerStarted","Data":"2712ca363cd1ef19700ae76ffdadcd4b016ac6981b2ffb3f4f1f44f242042e86"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.609741 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n87bt" event={"ID":"9a043ce0-300f-49b2-9f8f-30497aa70426","Type":"ContainerStarted","Data":"6220441989d0345f87f13a4256a53488ad9196fe6de956b2b44d3b1055d774a8"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.613070 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dktck" event={"ID":"766637c3-bd89-4f55-950e-68c553a5c6a4","Type":"ContainerStarted","Data":"c444896ee6cdc24d5b04a6392edbddab60d6ac1d2874a423c1f6355a5c9b6518"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.613095 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dktck" event={"ID":"766637c3-bd89-4f55-950e-68c553a5c6a4","Type":"ContainerStarted","Data":"297a933b6925fcc0d010260d1d04cb4e5a740c6a7ef61c71496f1d711d9e068d"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.615371 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl" podStartSLOduration=98.615363001 podStartE2EDuration="1m38.615363001s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:33.613496623 +0000 UTC m=+119.901602136" watchObservedRunningTime="2025-12-05 10:29:33.615363001 +0000 UTC m=+119.903468514" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 
10:29:33.616318 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gw4sz" podStartSLOduration=98.616313196 podStartE2EDuration="1m38.616313196s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:33.572515166 +0000 UTC m=+119.860620679" watchObservedRunningTime="2025-12-05 10:29:33.616313196 +0000 UTC m=+119.904418700" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.619808 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lwbh6" event={"ID":"fd8fbf42-d1ee-4ca8-bb56-491c068bbf4a","Type":"ContainerStarted","Data":"072d1242c1df216a75af5b30cc4814ae62ee460eb14b48b39fd9ab4c3712b25f"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.620356 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5b8dg" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.624207 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6ftq2" event={"ID":"ed98b6c8-708c-4282-b1fe-4064fd60f446","Type":"ContainerStarted","Data":"3aacfcf36b789dabd33a616a6c34f0e93684b2d433596336796900ad83b5f930"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.635669 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-w7mdv" podStartSLOduration=98.635662581 podStartE2EDuration="1m38.635662581s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:33.634248752 +0000 UTC m=+119.922354266" watchObservedRunningTime="2025-12-05 
10:29:33.635662581 +0000 UTC m=+119.923768093" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.639987 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" event={"ID":"356c2f0d-a078-42e4-925a-e4f39864eb48","Type":"ContainerStarted","Data":"0d6153913013fd54860ad502a25257bf3499bc1bb66acf947da50cb7f5bce740"} Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.640027 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.659897 4796 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gmlb4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.659940 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" podUID="356c2f0d-a078-42e4-925a-e4f39864eb48" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.660836 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:33 crc kubenswrapper[4796]: E1205 10:29:33.661001 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-12-05 10:29:34.160967977 +0000 UTC m=+120.449073490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.661252 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.661357 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6v7lm" Dec 05 10:29:33 crc kubenswrapper[4796]: E1205 10:29:33.661895 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:34.161882486 +0000 UTC m=+120.449987999 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.675920 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" podStartSLOduration=97.675816132 podStartE2EDuration="1m37.675816132s" podCreationTimestamp="2025-12-05 10:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:33.670549894 +0000 UTC m=+119.958655408" watchObservedRunningTime="2025-12-05 10:29:33.675816132 +0000 UTC m=+119.963921645" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.712956 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-r45ng" podStartSLOduration=98.712943916 podStartE2EDuration="1m38.712943916s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:33.707493443 +0000 UTC m=+119.995598956" watchObservedRunningTime="2025-12-05 10:29:33.712943916 +0000 UTC m=+120.001049429" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.763170 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:33 crc kubenswrapper[4796]: E1205 10:29:33.765000 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:34.264986992 +0000 UTC m=+120.553092505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.797921 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-z2zft" podStartSLOduration=98.797904524 podStartE2EDuration="1m38.797904524s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:33.749305701 +0000 UTC m=+120.037411214" watchObservedRunningTime="2025-12-05 10:29:33.797904524 +0000 UTC m=+120.086010036" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.798355 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wzr5n" podStartSLOduration=98.798347977 podStartE2EDuration="1m38.798347977s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-05 10:29:33.79737541 +0000 UTC m=+120.085480943" watchObservedRunningTime="2025-12-05 10:29:33.798347977 +0000 UTC m=+120.086453491" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.846232 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-v4phj" podStartSLOduration=98.846213612 podStartE2EDuration="1m38.846213612s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:33.845452311 +0000 UTC m=+120.133557825" watchObservedRunningTime="2025-12-05 10:29:33.846213612 +0000 UTC m=+120.134319126" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.868472 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:33 crc kubenswrapper[4796]: E1205 10:29:33.869033 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:34.369021646 +0000 UTC m=+120.657127159 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.919042 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-l6k4p" podStartSLOduration=98.919025178 podStartE2EDuration="1m38.919025178s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:33.917976838 +0000 UTC m=+120.206082351" watchObservedRunningTime="2025-12-05 10:29:33.919025178 +0000 UTC m=+120.207130691" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.969147 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" podStartSLOduration=98.969114692 podStartE2EDuration="1m38.969114692s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:33.963899732 +0000 UTC m=+120.252005246" watchObservedRunningTime="2025-12-05 10:29:33.969114692 +0000 UTC m=+120.257220205" Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.969860 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:33 crc kubenswrapper[4796]: E1205 10:29:33.970127 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:34.47010823 +0000 UTC m=+120.758213742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.970341 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:33 crc kubenswrapper[4796]: E1205 10:29:33.970745 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:34.470731631 +0000 UTC m=+120.758837144 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:33 crc kubenswrapper[4796]: I1205 10:29:33.995375 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-6ftq2" podStartSLOduration=97.995357932 podStartE2EDuration="1m37.995357932s" podCreationTimestamp="2025-12-05 10:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:33.993041819 +0000 UTC m=+120.281147333" watchObservedRunningTime="2025-12-05 10:29:33.995357932 +0000 UTC m=+120.283463445" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.010628 4796 patch_prober.go:28] interesting pod/router-default-5444994796-zsscn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 10:29:34 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 05 10:29:34 crc kubenswrapper[4796]: [+]process-running ok Dec 05 10:29:34 crc kubenswrapper[4796]: healthz check failed Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.010675 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zsscn" podUID="8c481089-1d82-466f-b015-541a729f07b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.031215 4796 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lxxz8" podStartSLOduration=98.031197495 podStartE2EDuration="1m38.031197495s" podCreationTimestamp="2025-12-05 10:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:34.030805529 +0000 UTC m=+120.318911042" watchObservedRunningTime="2025-12-05 10:29:34.031197495 +0000 UTC m=+120.319303009" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.070959 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42nbd" podStartSLOduration=99.070941127 podStartE2EDuration="1m39.070941127s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:34.070029393 +0000 UTC m=+120.358134906" watchObservedRunningTime="2025-12-05 10:29:34.070941127 +0000 UTC m=+120.359046639" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.072160 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:34 crc kubenswrapper[4796]: E1205 10:29:34.072600 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:34.572585997 +0000 UTC m=+120.860691511 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.137285 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-hktfd" podStartSLOduration=7.137269148 podStartE2EDuration="7.137269148s" podCreationTimestamp="2025-12-05 10:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:34.106762567 +0000 UTC m=+120.394868080" watchObservedRunningTime="2025-12-05 10:29:34.137269148 +0000 UTC m=+120.425374661" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.138588 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" podStartSLOduration=98.138579861 podStartE2EDuration="1m38.138579861s" podCreationTimestamp="2025-12-05 10:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:34.136138793 +0000 UTC m=+120.424244306" watchObservedRunningTime="2025-12-05 10:29:34.138579861 +0000 UTC m=+120.426685374" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.173633 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:34 crc kubenswrapper[4796]: E1205 10:29:34.174615 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:34.674604412 +0000 UTC m=+120.962709916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.179923 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-ngdm4" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.196168 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-dktck" podStartSLOduration=99.196152979 podStartE2EDuration="1m39.196152979s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:34.166239612 +0000 UTC m=+120.454345125" watchObservedRunningTime="2025-12-05 10:29:34.196152979 +0000 UTC m=+120.484258492" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.215821 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-9fpsf" podStartSLOduration=7.215806554 podStartE2EDuration="7.215806554s" podCreationTimestamp="2025-12-05 10:29:27 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:34.195620999 +0000 UTC m=+120.483726512" watchObservedRunningTime="2025-12-05 10:29:34.215806554 +0000 UTC m=+120.503912067" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.253926 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blx8z" podStartSLOduration=99.253910093 podStartE2EDuration="1m39.253910093s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:34.25214173 +0000 UTC m=+120.540247243" watchObservedRunningTime="2025-12-05 10:29:34.253910093 +0000 UTC m=+120.542015606" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.275290 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:34 crc kubenswrapper[4796]: E1205 10:29:34.275654 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:34.775637316 +0000 UTC m=+121.063742829 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.376432 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:34 crc kubenswrapper[4796]: E1205 10:29:34.377050 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:34.877035384 +0000 UTC m=+121.165140898 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.477914 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:34 crc kubenswrapper[4796]: E1205 10:29:34.478069 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:34.978046146 +0000 UTC m=+121.266151659 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.478112 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:34 crc kubenswrapper[4796]: E1205 10:29:34.478596 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:34.978590199 +0000 UTC m=+121.266695712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.538021 4796 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.579339 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:34 crc kubenswrapper[4796]: E1205 10:29:34.580078 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:35.080062618 +0000 UTC m=+121.368168131 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.654126 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-w7mdv" event={"ID":"a093a74f-f4f8-43ff-9f16-232761ad58e0","Type":"ContainerStarted","Data":"d42cbde1b09fdfcfb0c0e934654fc3d162a025798f3f5006a465d3996af183f6"} Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.667987 4796 generic.go:334] "Generic (PLEG): container finished" podID="f9ebd86c-0c38-4954-8c59-a4e0168fb2d5" containerID="75be366635200236d7249af0c8be43ab5f0ec4777402f2f461c2e94a8a1b0278" exitCode=0 Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.668042 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl" event={"ID":"f9ebd86c-0c38-4954-8c59-a4e0168fb2d5","Type":"ContainerDied","Data":"75be366635200236d7249af0c8be43ab5f0ec4777402f2f461c2e94a8a1b0278"} Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.680926 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:34 crc kubenswrapper[4796]: E1205 10:29:34.681282 4796 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:35.181271341 +0000 UTC m=+121.469376854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.684031 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zptnp"] Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.684816 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zptnp" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.685748 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n87bt" event={"ID":"9a043ce0-300f-49b2-9f8f-30497aa70426","Type":"ContainerStarted","Data":"8190e9e104e57b8592136e7cc8924934d58921bf8e7e08f6aa180a191036e4c1"} Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.685773 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n87bt" event={"ID":"9a043ce0-300f-49b2-9f8f-30497aa70426","Type":"ContainerStarted","Data":"623703d1458c196f7d3476160361707b053b314f37baa006132115bccdaef3ab"} Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.691990 4796 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gmlb4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/healthz\": dial 
tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.692044 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" podUID="356c2f0d-a078-42e4-925a-e4f39864eb48" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.25:8080/healthz\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.692449 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.710933 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-42nbd" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.714726 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zptnp"] Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.781861 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:34 crc kubenswrapper[4796]: E1205 10:29:34.782044 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:35.282006263 +0000 UTC m=+121.570111777 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.782368 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b4178e4-10a3-4011-8994-b7ca6f64b45d-catalog-content\") pod \"certified-operators-zptnp\" (UID: \"9b4178e4-10a3-4011-8994-b7ca6f64b45d\") " pod="openshift-marketplace/certified-operators-zptnp" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.782746 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b4178e4-10a3-4011-8994-b7ca6f64b45d-utilities\") pod \"certified-operators-zptnp\" (UID: \"9b4178e4-10a3-4011-8994-b7ca6f64b45d\") " pod="openshift-marketplace/certified-operators-zptnp" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.782858 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpjsg\" (UniqueName: \"kubernetes.io/projected/9b4178e4-10a3-4011-8994-b7ca6f64b45d-kube-api-access-lpjsg\") pod \"certified-operators-zptnp\" (UID: \"9b4178e4-10a3-4011-8994-b7ca6f64b45d\") " pod="openshift-marketplace/certified-operators-zptnp" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.783097 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:34 crc kubenswrapper[4796]: E1205 10:29:34.788064 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:35.28805442 +0000 UTC m=+121.576159923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.868450 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.870261 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-v4phj" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.870449 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.870910 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.875176 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-snx4m"] Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.877601 4796 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-snx4m" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.877704 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-snx4m"] Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.880091 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.886505 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.886606 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:34 crc kubenswrapper[4796]: E1205 10:29:34.886975 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:35.386962379 +0000 UTC m=+121.675067892 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.888178 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b4178e4-10a3-4011-8994-b7ca6f64b45d-utilities\") pod \"certified-operators-zptnp\" (UID: \"9b4178e4-10a3-4011-8994-b7ca6f64b45d\") " pod="openshift-marketplace/certified-operators-zptnp" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.888217 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpjsg\" (UniqueName: \"kubernetes.io/projected/9b4178e4-10a3-4011-8994-b7ca6f64b45d-kube-api-access-lpjsg\") pod \"certified-operators-zptnp\" (UID: \"9b4178e4-10a3-4011-8994-b7ca6f64b45d\") " pod="openshift-marketplace/certified-operators-zptnp" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.888275 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.888323 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b4178e4-10a3-4011-8994-b7ca6f64b45d-catalog-content\") pod \"certified-operators-zptnp\" (UID: 
\"9b4178e4-10a3-4011-8994-b7ca6f64b45d\") " pod="openshift-marketplace/certified-operators-zptnp" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.888771 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b4178e4-10a3-4011-8994-b7ca6f64b45d-catalog-content\") pod \"certified-operators-zptnp\" (UID: \"9b4178e4-10a3-4011-8994-b7ca6f64b45d\") " pod="openshift-marketplace/certified-operators-zptnp" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.888978 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b4178e4-10a3-4011-8994-b7ca6f64b45d-utilities\") pod \"certified-operators-zptnp\" (UID: \"9b4178e4-10a3-4011-8994-b7ca6f64b45d\") " pod="openshift-marketplace/certified-operators-zptnp" Dec 05 10:29:34 crc kubenswrapper[4796]: E1205 10:29:34.889322 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:35.389314138 +0000 UTC m=+121.677419652 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.913498 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpjsg\" (UniqueName: \"kubernetes.io/projected/9b4178e4-10a3-4011-8994-b7ca6f64b45d-kube-api-access-lpjsg\") pod \"certified-operators-zptnp\" (UID: \"9b4178e4-10a3-4011-8994-b7ca6f64b45d\") " pod="openshift-marketplace/certified-operators-zptnp" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.989732 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.989881 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49519f1b-a5e0-4d0f-bf1d-d6927a8f0957-utilities\") pod \"community-operators-snx4m\" (UID: \"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957\") " pod="openshift-marketplace/community-operators-snx4m" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.989966 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxtsx\" (UniqueName: \"kubernetes.io/projected/49519f1b-a5e0-4d0f-bf1d-d6927a8f0957-kube-api-access-mxtsx\") pod \"community-operators-snx4m\" (UID: 
\"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957\") " pod="openshift-marketplace/community-operators-snx4m" Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.990000 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49519f1b-a5e0-4d0f-bf1d-d6927a8f0957-catalog-content\") pod \"community-operators-snx4m\" (UID: \"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957\") " pod="openshift-marketplace/community-operators-snx4m" Dec 05 10:29:34 crc kubenswrapper[4796]: E1205 10:29:34.990095 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-05 10:29:35.490081673 +0000 UTC m=+121.778187186 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.998246 4796 patch_prober.go:28] interesting pod/router-default-5444994796-zsscn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 10:29:34 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 05 10:29:34 crc kubenswrapper[4796]: [+]process-running ok Dec 05 10:29:34 crc kubenswrapper[4796]: healthz check failed Dec 05 10:29:34 crc kubenswrapper[4796]: I1205 10:29:34.998323 4796 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-zsscn" podUID="8c481089-1d82-466f-b015-541a729f07b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.018837 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zptnp" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.056200 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gln54"] Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.057042 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gln54" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.065060 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gln54"] Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.091308 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49519f1b-a5e0-4d0f-bf1d-d6927a8f0957-utilities\") pod \"community-operators-snx4m\" (UID: \"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957\") " pod="openshift-marketplace/community-operators-snx4m" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.091394 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.091437 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxtsx\" (UniqueName: 
\"kubernetes.io/projected/49519f1b-a5e0-4d0f-bf1d-d6927a8f0957-kube-api-access-mxtsx\") pod \"community-operators-snx4m\" (UID: \"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957\") " pod="openshift-marketplace/community-operators-snx4m" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.091480 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49519f1b-a5e0-4d0f-bf1d-d6927a8f0957-catalog-content\") pod \"community-operators-snx4m\" (UID: \"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957\") " pod="openshift-marketplace/community-operators-snx4m" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.091916 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49519f1b-a5e0-4d0f-bf1d-d6927a8f0957-catalog-content\") pod \"community-operators-snx4m\" (UID: \"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957\") " pod="openshift-marketplace/community-operators-snx4m" Dec 05 10:29:35 crc kubenswrapper[4796]: E1205 10:29:35.092204 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-05 10:29:35.59219148 +0000 UTC m=+121.880296993 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k9j8l" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.092559 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49519f1b-a5e0-4d0f-bf1d-d6927a8f0957-utilities\") pod \"community-operators-snx4m\" (UID: \"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957\") " pod="openshift-marketplace/community-operators-snx4m" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.108361 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.109017 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.111378 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.111597 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.126163 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxtsx\" (UniqueName: \"kubernetes.io/projected/49519f1b-a5e0-4d0f-bf1d-d6927a8f0957-kube-api-access-mxtsx\") pod \"community-operators-snx4m\" (UID: \"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957\") " pod="openshift-marketplace/community-operators-snx4m" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.149904 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.157733 4796 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-05T10:29:34.538056224Z","Handler":null,"Name":""} Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.161704 4796 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.161726 4796 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.191573 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-snx4m" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.192775 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.193799 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36e71a2b-e359-4240-b8d0-cf1fbe6f7e52-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"36e71a2b-e359-4240-b8d0-cf1fbe6f7e52\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.193877 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/703b0763-b2db-4507-a779-65789d6cba65-catalog-content\") pod \"certified-operators-gln54\" (UID: \"703b0763-b2db-4507-a779-65789d6cba65\") " pod="openshift-marketplace/certified-operators-gln54" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.193935 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36e71a2b-e359-4240-b8d0-cf1fbe6f7e52-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"36e71a2b-e359-4240-b8d0-cf1fbe6f7e52\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.193975 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/703b0763-b2db-4507-a779-65789d6cba65-utilities\") pod 
\"certified-operators-gln54\" (UID: \"703b0763-b2db-4507-a779-65789d6cba65\") " pod="openshift-marketplace/certified-operators-gln54" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.194123 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2p4m\" (UniqueName: \"kubernetes.io/projected/703b0763-b2db-4507-a779-65789d6cba65-kube-api-access-m2p4m\") pod \"certified-operators-gln54\" (UID: \"703b0763-b2db-4507-a779-65789d6cba65\") " pod="openshift-marketplace/certified-operators-gln54" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.197940 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.224904 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zptnp"] Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.253742 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lrz7k"] Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.254573 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lrz7k" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.267494 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lrz7k"] Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.294985 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2p4m\" (UniqueName: \"kubernetes.io/projected/703b0763-b2db-4507-a779-65789d6cba65-kube-api-access-m2p4m\") pod \"certified-operators-gln54\" (UID: \"703b0763-b2db-4507-a779-65789d6cba65\") " pod="openshift-marketplace/certified-operators-gln54" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.295027 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.295064 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36e71a2b-e359-4240-b8d0-cf1fbe6f7e52-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"36e71a2b-e359-4240-b8d0-cf1fbe6f7e52\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.295094 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/703b0763-b2db-4507-a779-65789d6cba65-catalog-content\") pod \"certified-operators-gln54\" (UID: \"703b0763-b2db-4507-a779-65789d6cba65\") " pod="openshift-marketplace/certified-operators-gln54" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.295109 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36e71a2b-e359-4240-b8d0-cf1fbe6f7e52-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"36e71a2b-e359-4240-b8d0-cf1fbe6f7e52\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.295129 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/703b0763-b2db-4507-a779-65789d6cba65-utilities\") pod \"certified-operators-gln54\" (UID: \"703b0763-b2db-4507-a779-65789d6cba65\") " pod="openshift-marketplace/certified-operators-gln54" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.295795 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/703b0763-b2db-4507-a779-65789d6cba65-utilities\") pod \"certified-operators-gln54\" (UID: \"703b0763-b2db-4507-a779-65789d6cba65\") " pod="openshift-marketplace/certified-operators-gln54" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.295835 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36e71a2b-e359-4240-b8d0-cf1fbe6f7e52-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"36e71a2b-e359-4240-b8d0-cf1fbe6f7e52\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.296081 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/703b0763-b2db-4507-a779-65789d6cba65-catalog-content\") pod \"certified-operators-gln54\" (UID: \"703b0763-b2db-4507-a779-65789d6cba65\") " pod="openshift-marketplace/certified-operators-gln54" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.300438 4796 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.300467 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.310222 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36e71a2b-e359-4240-b8d0-cf1fbe6f7e52-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"36e71a2b-e359-4240-b8d0-cf1fbe6f7e52\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.312227 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2p4m\" (UniqueName: \"kubernetes.io/projected/703b0763-b2db-4507-a779-65789d6cba65-kube-api-access-m2p4m\") pod \"certified-operators-gln54\" (UID: \"703b0763-b2db-4507-a779-65789d6cba65\") " pod="openshift-marketplace/certified-operators-gln54" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.318206 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k9j8l\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.328044 4796 
patch_prober.go:28] interesting pod/apiserver-76f77b778f-v4phj container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 05 10:29:35 crc kubenswrapper[4796]: [+]log ok Dec 05 10:29:35 crc kubenswrapper[4796]: [+]etcd ok Dec 05 10:29:35 crc kubenswrapper[4796]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 05 10:29:35 crc kubenswrapper[4796]: [+]poststarthook/generic-apiserver-start-informers ok Dec 05 10:29:35 crc kubenswrapper[4796]: [+]poststarthook/max-in-flight-filter ok Dec 05 10:29:35 crc kubenswrapper[4796]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 05 10:29:35 crc kubenswrapper[4796]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 05 10:29:35 crc kubenswrapper[4796]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 05 10:29:35 crc kubenswrapper[4796]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 05 10:29:35 crc kubenswrapper[4796]: [+]poststarthook/project.openshift.io-projectcache ok Dec 05 10:29:35 crc kubenswrapper[4796]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 05 10:29:35 crc kubenswrapper[4796]: [+]poststarthook/openshift.io-startinformers ok Dec 05 10:29:35 crc kubenswrapper[4796]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 05 10:29:35 crc kubenswrapper[4796]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 05 10:29:35 crc kubenswrapper[4796]: livez check failed Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.328085 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-v4phj" podUID="93a93c24-45ae-43e1-9349-7471c6a218f8" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.368363 4796 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gln54" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.369846 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-snx4m"] Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.396343 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/818eb476-c48b-40fc-9dae-e3d7af8f8f25-utilities\") pod \"community-operators-lrz7k\" (UID: \"818eb476-c48b-40fc-9dae-e3d7af8f8f25\") " pod="openshift-marketplace/community-operators-lrz7k" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.396382 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/818eb476-c48b-40fc-9dae-e3d7af8f8f25-catalog-content\") pod \"community-operators-lrz7k\" (UID: \"818eb476-c48b-40fc-9dae-e3d7af8f8f25\") " pod="openshift-marketplace/community-operators-lrz7k" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.396419 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csgjr\" (UniqueName: \"kubernetes.io/projected/818eb476-c48b-40fc-9dae-e3d7af8f8f25-kube-api-access-csgjr\") pod \"community-operators-lrz7k\" (UID: \"818eb476-c48b-40fc-9dae-e3d7af8f8f25\") " pod="openshift-marketplace/community-operators-lrz7k" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.411282 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.420477 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.436631 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.497841 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/818eb476-c48b-40fc-9dae-e3d7af8f8f25-utilities\") pod \"community-operators-lrz7k\" (UID: \"818eb476-c48b-40fc-9dae-e3d7af8f8f25\") " pod="openshift-marketplace/community-operators-lrz7k" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.497874 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/818eb476-c48b-40fc-9dae-e3d7af8f8f25-catalog-content\") pod \"community-operators-lrz7k\" (UID: \"818eb476-c48b-40fc-9dae-e3d7af8f8f25\") " pod="openshift-marketplace/community-operators-lrz7k" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.497909 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csgjr\" (UniqueName: \"kubernetes.io/projected/818eb476-c48b-40fc-9dae-e3d7af8f8f25-kube-api-access-csgjr\") pod \"community-operators-lrz7k\" (UID: \"818eb476-c48b-40fc-9dae-e3d7af8f8f25\") " pod="openshift-marketplace/community-operators-lrz7k" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.498322 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/818eb476-c48b-40fc-9dae-e3d7af8f8f25-utilities\") pod \"community-operators-lrz7k\" (UID: \"818eb476-c48b-40fc-9dae-e3d7af8f8f25\") " pod="openshift-marketplace/community-operators-lrz7k" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.498535 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/818eb476-c48b-40fc-9dae-e3d7af8f8f25-catalog-content\") pod \"community-operators-lrz7k\" (UID: \"818eb476-c48b-40fc-9dae-e3d7af8f8f25\") " pod="openshift-marketplace/community-operators-lrz7k" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.519168 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csgjr\" (UniqueName: \"kubernetes.io/projected/818eb476-c48b-40fc-9dae-e3d7af8f8f25-kube-api-access-csgjr\") pod \"community-operators-lrz7k\" (UID: \"818eb476-c48b-40fc-9dae-e3d7af8f8f25\") " pod="openshift-marketplace/community-operators-lrz7k" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.537658 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gln54"] Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.578021 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-k9j8l"] Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.581423 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lrz7k" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.615533 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.700016 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gln54" event={"ID":"703b0763-b2db-4507-a779-65789d6cba65","Type":"ContainerStarted","Data":"01272087109f13635e5f69fb0f4d00f63b8c7d3a29f3ca3fb922643a343af958"} Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.700072 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gln54" event={"ID":"703b0763-b2db-4507-a779-65789d6cba65","Type":"ContainerStarted","Data":"e6b5bd358b45b653e5222126be9f30bdd7b2cdd974a78ec46d51e9415763fdd3"} Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.702329 4796 generic.go:334] "Generic (PLEG): container finished" podID="49519f1b-a5e0-4d0f-bf1d-d6927a8f0957" containerID="16a001201e88293b084dca9882cc2e113c456fba34cdbfe54b1c210e289c1fc8" exitCode=0 Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.702558 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snx4m" event={"ID":"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957","Type":"ContainerDied","Data":"16a001201e88293b084dca9882cc2e113c456fba34cdbfe54b1c210e289c1fc8"} Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.703085 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snx4m" event={"ID":"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957","Type":"ContainerStarted","Data":"68281d0229588fdbd09c9684dd6c31af8d2d0684cb8293a9242bb9f08cdedbbb"} Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.703938 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 10:29:35 crc 
kubenswrapper[4796]: I1205 10:29:35.706530 4796 generic.go:334] "Generic (PLEG): container finished" podID="9b4178e4-10a3-4011-8994-b7ca6f64b45d" containerID="15433742bd388422a2b410b19d79ce6d70d09a617e8b44135e4b918e1ee6d7d1" exitCode=0 Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.706589 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zptnp" event={"ID":"9b4178e4-10a3-4011-8994-b7ca6f64b45d","Type":"ContainerDied","Data":"15433742bd388422a2b410b19d79ce6d70d09a617e8b44135e4b918e1ee6d7d1"} Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.706615 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zptnp" event={"ID":"9b4178e4-10a3-4011-8994-b7ca6f64b45d","Type":"ContainerStarted","Data":"144b4763b97f1b35f1ecb9d83b9619775887c27c44ee897ae5a620a4771b9c09"} Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.711749 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-n87bt" event={"ID":"9a043ce0-300f-49b2-9f8f-30497aa70426","Type":"ContainerStarted","Data":"b62f48e8b5119a3b887c5fa6cfd50d709339b322980ca7848193a9baa4670561"} Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.738791 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" event={"ID":"b5fec51d-bef3-426c-ba74-90a48a94d9ce","Type":"ContainerStarted","Data":"9708a6c108871f89caa1bd938519c44bc3bdd3e8abe9073b92d7e6f1d034828f"} Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.743466 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"36e71a2b-e359-4240-b8d0-cf1fbe6f7e52","Type":"ContainerStarted","Data":"b22100d009bc7d44557bb014b9afae7a89741ec133b05e960e9275656c4b0074"} Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.749807 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-pkl5m" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.759982 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-n87bt" podStartSLOduration=8.759967523 podStartE2EDuration="8.759967523s" podCreationTimestamp="2025-12-05 10:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:35.758893164 +0000 UTC m=+122.046998687" watchObservedRunningTime="2025-12-05 10:29:35.759967523 +0000 UTC m=+122.048073036" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.935132 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl" Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.956735 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lrz7k"] Dec 05 10:29:35 crc kubenswrapper[4796]: W1205 10:29:35.959517 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod818eb476_c48b_40fc_9dae_e3d7af8f8f25.slice/crio-6ad3919a325262ce010f1afb0c45745a43a68aa3a4a9ec0dce94b270fafec061 WatchSource:0}: Error finding container 6ad3919a325262ce010f1afb0c45745a43a68aa3a4a9ec0dce94b270fafec061: Status 404 returned error can't find the container with id 6ad3919a325262ce010f1afb0c45745a43a68aa3a4a9ec0dce94b270fafec061 Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.997808 4796 patch_prober.go:28] interesting pod/router-default-5444994796-zsscn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 10:29:35 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 05 10:29:35 crc 
kubenswrapper[4796]: [+]process-running ok Dec 05 10:29:35 crc kubenswrapper[4796]: healthz check failed Dec 05 10:29:35 crc kubenswrapper[4796]: I1205 10:29:35.997906 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zsscn" podUID="8c481089-1d82-466f-b015-541a729f07b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.010159 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f9ebd86c-0c38-4954-8c59-a4e0168fb2d5-secret-volume\") pod \"f9ebd86c-0c38-4954-8c59-a4e0168fb2d5\" (UID: \"f9ebd86c-0c38-4954-8c59-a4e0168fb2d5\") " Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.010264 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9ebd86c-0c38-4954-8c59-a4e0168fb2d5-config-volume\") pod \"f9ebd86c-0c38-4954-8c59-a4e0168fb2d5\" (UID: \"f9ebd86c-0c38-4954-8c59-a4e0168fb2d5\") " Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.010322 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hp7t\" (UniqueName: \"kubernetes.io/projected/f9ebd86c-0c38-4954-8c59-a4e0168fb2d5-kube-api-access-4hp7t\") pod \"f9ebd86c-0c38-4954-8c59-a4e0168fb2d5\" (UID: \"f9ebd86c-0c38-4954-8c59-a4e0168fb2d5\") " Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.010648 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9ebd86c-0c38-4954-8c59-a4e0168fb2d5-config-volume" (OuterVolumeSpecName: "config-volume") pod "f9ebd86c-0c38-4954-8c59-a4e0168fb2d5" (UID: "f9ebd86c-0c38-4954-8c59-a4e0168fb2d5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.015851 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9ebd86c-0c38-4954-8c59-a4e0168fb2d5-kube-api-access-4hp7t" (OuterVolumeSpecName: "kube-api-access-4hp7t") pod "f9ebd86c-0c38-4954-8c59-a4e0168fb2d5" (UID: "f9ebd86c-0c38-4954-8c59-a4e0168fb2d5"). InnerVolumeSpecName "kube-api-access-4hp7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.016238 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9ebd86c-0c38-4954-8c59-a4e0168fb2d5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f9ebd86c-0c38-4954-8c59-a4e0168fb2d5" (UID: "f9ebd86c-0c38-4954-8c59-a4e0168fb2d5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.037324 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.111403 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9ebd86c-0c38-4954-8c59-a4e0168fb2d5-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.111648 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hp7t\" (UniqueName: \"kubernetes.io/projected/f9ebd86c-0c38-4954-8c59-a4e0168fb2d5-kube-api-access-4hp7t\") on node \"crc\" DevicePath \"\"" Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.111659 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f9ebd86c-0c38-4954-8c59-a4e0168fb2d5-secret-volume\") on node \"crc\" 
DevicePath \"\"" Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.750896 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl" Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.750887 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl" event={"ID":"f9ebd86c-0c38-4954-8c59-a4e0168fb2d5","Type":"ContainerDied","Data":"0ed2a86acb7cfd7d33835bbceddc5c1f3460a93c0a8ed306df86ebf7d73e8b02"} Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.751038 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ed2a86acb7cfd7d33835bbceddc5c1f3460a93c0a8ed306df86ebf7d73e8b02" Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.752874 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" event={"ID":"b5fec51d-bef3-426c-ba74-90a48a94d9ce","Type":"ContainerStarted","Data":"4aa8ad3a55294c79cf4f2e9f09944105ff3e4c89815d9978f487a54aff31dcf0"} Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.752914 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.754985 4796 generic.go:334] "Generic (PLEG): container finished" podID="36e71a2b-e359-4240-b8d0-cf1fbe6f7e52" containerID="da61a44832b1f9596dbaf009036200a904e6c4304899e9ee4ad0defc8d8cca2d" exitCode=0 Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.755035 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"36e71a2b-e359-4240-b8d0-cf1fbe6f7e52","Type":"ContainerDied","Data":"da61a44832b1f9596dbaf009036200a904e6c4304899e9ee4ad0defc8d8cca2d"} Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.759642 4796 generic.go:334] 
"Generic (PLEG): container finished" podID="818eb476-c48b-40fc-9dae-e3d7af8f8f25" containerID="bfb301cd79344fd373c936dac7f76aa26ec79274d0b80e6313817c410f755167" exitCode=0 Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.759708 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrz7k" event={"ID":"818eb476-c48b-40fc-9dae-e3d7af8f8f25","Type":"ContainerDied","Data":"bfb301cd79344fd373c936dac7f76aa26ec79274d0b80e6313817c410f755167"} Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.759726 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrz7k" event={"ID":"818eb476-c48b-40fc-9dae-e3d7af8f8f25","Type":"ContainerStarted","Data":"6ad3919a325262ce010f1afb0c45745a43a68aa3a4a9ec0dce94b270fafec061"} Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.761171 4796 generic.go:334] "Generic (PLEG): container finished" podID="703b0763-b2db-4507-a779-65789d6cba65" containerID="01272087109f13635e5f69fb0f4d00f63b8c7d3a29f3ca3fb922643a343af958" exitCode=0 Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.761286 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gln54" event={"ID":"703b0763-b2db-4507-a779-65789d6cba65","Type":"ContainerDied","Data":"01272087109f13635e5f69fb0f4d00f63b8c7d3a29f3ca3fb922643a343af958"} Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.768726 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" podStartSLOduration=101.768710159 podStartE2EDuration="1m41.768710159s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:36.765963237 +0000 UTC m=+123.054068751" watchObservedRunningTime="2025-12-05 10:29:36.768710159 +0000 UTC m=+123.056815672" Dec 05 
10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.857164 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9xblc"] Dec 05 10:29:36 crc kubenswrapper[4796]: E1205 10:29:36.857849 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9ebd86c-0c38-4954-8c59-a4e0168fb2d5" containerName="collect-profiles" Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.857872 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9ebd86c-0c38-4954-8c59-a4e0168fb2d5" containerName="collect-profiles" Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.858134 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9ebd86c-0c38-4954-8c59-a4e0168fb2d5" containerName="collect-profiles" Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.859676 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9xblc" Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.863676 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.873718 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9xblc"] Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.922571 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/656b901f-73ef-4d9d-adb1-a8db28382f48-catalog-content\") pod \"redhat-marketplace-9xblc\" (UID: \"656b901f-73ef-4d9d-adb1-a8db28382f48\") " pod="openshift-marketplace/redhat-marketplace-9xblc" Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.922668 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/656b901f-73ef-4d9d-adb1-a8db28382f48-utilities\") 
pod \"redhat-marketplace-9xblc\" (UID: \"656b901f-73ef-4d9d-adb1-a8db28382f48\") " pod="openshift-marketplace/redhat-marketplace-9xblc" Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.922800 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7s8r\" (UniqueName: \"kubernetes.io/projected/656b901f-73ef-4d9d-adb1-a8db28382f48-kube-api-access-r7s8r\") pod \"redhat-marketplace-9xblc\" (UID: \"656b901f-73ef-4d9d-adb1-a8db28382f48\") " pod="openshift-marketplace/redhat-marketplace-9xblc" Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.995538 4796 patch_prober.go:28] interesting pod/router-default-5444994796-zsscn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 10:29:36 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 05 10:29:36 crc kubenswrapper[4796]: [+]process-running ok Dec 05 10:29:36 crc kubenswrapper[4796]: healthz check failed Dec 05 10:29:36 crc kubenswrapper[4796]: I1205 10:29:36.995623 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zsscn" podUID="8c481089-1d82-466f-b015-541a729f07b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.024121 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7s8r\" (UniqueName: \"kubernetes.io/projected/656b901f-73ef-4d9d-adb1-a8db28382f48-kube-api-access-r7s8r\") pod \"redhat-marketplace-9xblc\" (UID: \"656b901f-73ef-4d9d-adb1-a8db28382f48\") " pod="openshift-marketplace/redhat-marketplace-9xblc" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.024189 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/656b901f-73ef-4d9d-adb1-a8db28382f48-catalog-content\") pod \"redhat-marketplace-9xblc\" (UID: \"656b901f-73ef-4d9d-adb1-a8db28382f48\") " pod="openshift-marketplace/redhat-marketplace-9xblc" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.024232 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/656b901f-73ef-4d9d-adb1-a8db28382f48-utilities\") pod \"redhat-marketplace-9xblc\" (UID: \"656b901f-73ef-4d9d-adb1-a8db28382f48\") " pod="openshift-marketplace/redhat-marketplace-9xblc" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.024617 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/656b901f-73ef-4d9d-adb1-a8db28382f48-utilities\") pod \"redhat-marketplace-9xblc\" (UID: \"656b901f-73ef-4d9d-adb1-a8db28382f48\") " pod="openshift-marketplace/redhat-marketplace-9xblc" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.025049 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/656b901f-73ef-4d9d-adb1-a8db28382f48-catalog-content\") pod \"redhat-marketplace-9xblc\" (UID: \"656b901f-73ef-4d9d-adb1-a8db28382f48\") " pod="openshift-marketplace/redhat-marketplace-9xblc" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.046707 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7s8r\" (UniqueName: \"kubernetes.io/projected/656b901f-73ef-4d9d-adb1-a8db28382f48-kube-api-access-r7s8r\") pod \"redhat-marketplace-9xblc\" (UID: \"656b901f-73ef-4d9d-adb1-a8db28382f48\") " pod="openshift-marketplace/redhat-marketplace-9xblc" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.176306 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9xblc" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.258608 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-plqgk"] Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.260009 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plqgk" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.271636 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-plqgk"] Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.331737 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttzr5\" (UniqueName: \"kubernetes.io/projected/873436a9-325d-4ea9-9198-e34642a13a6f-kube-api-access-ttzr5\") pod \"redhat-marketplace-plqgk\" (UID: \"873436a9-325d-4ea9-9198-e34642a13a6f\") " pod="openshift-marketplace/redhat-marketplace-plqgk" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.331836 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/873436a9-325d-4ea9-9198-e34642a13a6f-catalog-content\") pod \"redhat-marketplace-plqgk\" (UID: \"873436a9-325d-4ea9-9198-e34642a13a6f\") " pod="openshift-marketplace/redhat-marketplace-plqgk" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.332153 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/873436a9-325d-4ea9-9198-e34642a13a6f-utilities\") pod \"redhat-marketplace-plqgk\" (UID: \"873436a9-325d-4ea9-9198-e34642a13a6f\") " pod="openshift-marketplace/redhat-marketplace-plqgk" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.360241 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-9xblc"] Dec 05 10:29:37 crc kubenswrapper[4796]: W1205 10:29:37.371246 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod656b901f_73ef_4d9d_adb1_a8db28382f48.slice/crio-f69508c305d32ed1564dac2a8c5830a3ce86c33f52fd1fbd7a1f599e593f5776 WatchSource:0}: Error finding container f69508c305d32ed1564dac2a8c5830a3ce86c33f52fd1fbd7a1f599e593f5776: Status 404 returned error can't find the container with id f69508c305d32ed1564dac2a8c5830a3ce86c33f52fd1fbd7a1f599e593f5776 Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.432852 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttzr5\" (UniqueName: \"kubernetes.io/projected/873436a9-325d-4ea9-9198-e34642a13a6f-kube-api-access-ttzr5\") pod \"redhat-marketplace-plqgk\" (UID: \"873436a9-325d-4ea9-9198-e34642a13a6f\") " pod="openshift-marketplace/redhat-marketplace-plqgk" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.432905 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/873436a9-325d-4ea9-9198-e34642a13a6f-catalog-content\") pod \"redhat-marketplace-plqgk\" (UID: \"873436a9-325d-4ea9-9198-e34642a13a6f\") " pod="openshift-marketplace/redhat-marketplace-plqgk" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.432990 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/873436a9-325d-4ea9-9198-e34642a13a6f-utilities\") pod \"redhat-marketplace-plqgk\" (UID: \"873436a9-325d-4ea9-9198-e34642a13a6f\") " pod="openshift-marketplace/redhat-marketplace-plqgk" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.433519 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/873436a9-325d-4ea9-9198-e34642a13a6f-catalog-content\") pod \"redhat-marketplace-plqgk\" (UID: \"873436a9-325d-4ea9-9198-e34642a13a6f\") " pod="openshift-marketplace/redhat-marketplace-plqgk" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.433799 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/873436a9-325d-4ea9-9198-e34642a13a6f-utilities\") pod \"redhat-marketplace-plqgk\" (UID: \"873436a9-325d-4ea9-9198-e34642a13a6f\") " pod="openshift-marketplace/redhat-marketplace-plqgk" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.457997 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttzr5\" (UniqueName: \"kubernetes.io/projected/873436a9-325d-4ea9-9198-e34642a13a6f-kube-api-access-ttzr5\") pod \"redhat-marketplace-plqgk\" (UID: \"873436a9-325d-4ea9-9198-e34642a13a6f\") " pod="openshift-marketplace/redhat-marketplace-plqgk" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.582161 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plqgk" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.737009 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-plqgk"] Dec 05 10:29:37 crc kubenswrapper[4796]: W1205 10:29:37.744365 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod873436a9_325d_4ea9_9198_e34642a13a6f.slice/crio-c9cf019f50042151eef3946f664d3c9bbee22c215ca44bf909fae540861b7a0e WatchSource:0}: Error finding container c9cf019f50042151eef3946f664d3c9bbee22c215ca44bf909fae540861b7a0e: Status 404 returned error can't find the container with id c9cf019f50042151eef3946f664d3c9bbee22c215ca44bf909fae540861b7a0e Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.769498 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plqgk" event={"ID":"873436a9-325d-4ea9-9198-e34642a13a6f","Type":"ContainerStarted","Data":"c9cf019f50042151eef3946f664d3c9bbee22c215ca44bf909fae540861b7a0e"} Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.772150 4796 generic.go:334] "Generic (PLEG): container finished" podID="656b901f-73ef-4d9d-adb1-a8db28382f48" containerID="3b1456a1a50a42141de95a4639c67f6897644d0b16b77b2353f4c43e12ef1945" exitCode=0 Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.772204 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xblc" event={"ID":"656b901f-73ef-4d9d-adb1-a8db28382f48","Type":"ContainerDied","Data":"3b1456a1a50a42141de95a4639c67f6897644d0b16b77b2353f4c43e12ef1945"} Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.772251 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xblc" 
event={"ID":"656b901f-73ef-4d9d-adb1-a8db28382f48","Type":"ContainerStarted","Data":"f69508c305d32ed1564dac2a8c5830a3ce86c33f52fd1fbd7a1f599e593f5776"} Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.857710 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8gj86"] Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.858858 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8gj86" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.860754 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.862588 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8gj86"] Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.941935 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2786ac62-1a3d-46fe-951c-be542f08bf55-catalog-content\") pod \"redhat-operators-8gj86\" (UID: \"2786ac62-1a3d-46fe-951c-be542f08bf55\") " pod="openshift-marketplace/redhat-operators-8gj86" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.941980 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2786ac62-1a3d-46fe-951c-be542f08bf55-utilities\") pod \"redhat-operators-8gj86\" (UID: \"2786ac62-1a3d-46fe-951c-be542f08bf55\") " pod="openshift-marketplace/redhat-operators-8gj86" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.942059 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvz6r\" (UniqueName: \"kubernetes.io/projected/2786ac62-1a3d-46fe-951c-be542f08bf55-kube-api-access-pvz6r\") pod \"redhat-operators-8gj86\" 
(UID: \"2786ac62-1a3d-46fe-951c-be542f08bf55\") " pod="openshift-marketplace/redhat-operators-8gj86" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.944995 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.995550 4796 patch_prober.go:28] interesting pod/router-default-5444994796-zsscn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 05 10:29:37 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld Dec 05 10:29:37 crc kubenswrapper[4796]: [+]process-running ok Dec 05 10:29:37 crc kubenswrapper[4796]: healthz check failed Dec 05 10:29:37 crc kubenswrapper[4796]: I1205 10:29:37.995606 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zsscn" podUID="8c481089-1d82-466f-b015-541a729f07b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.043100 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36e71a2b-e359-4240-b8d0-cf1fbe6f7e52-kube-api-access\") pod \"36e71a2b-e359-4240-b8d0-cf1fbe6f7e52\" (UID: \"36e71a2b-e359-4240-b8d0-cf1fbe6f7e52\") " Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.043139 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36e71a2b-e359-4240-b8d0-cf1fbe6f7e52-kubelet-dir\") pod \"36e71a2b-e359-4240-b8d0-cf1fbe6f7e52\" (UID: \"36e71a2b-e359-4240-b8d0-cf1fbe6f7e52\") " Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.043352 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pvz6r\" (UniqueName: \"kubernetes.io/projected/2786ac62-1a3d-46fe-951c-be542f08bf55-kube-api-access-pvz6r\") pod \"redhat-operators-8gj86\" (UID: \"2786ac62-1a3d-46fe-951c-be542f08bf55\") " pod="openshift-marketplace/redhat-operators-8gj86" Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.043451 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36e71a2b-e359-4240-b8d0-cf1fbe6f7e52-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "36e71a2b-e359-4240-b8d0-cf1fbe6f7e52" (UID: "36e71a2b-e359-4240-b8d0-cf1fbe6f7e52"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.043476 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2786ac62-1a3d-46fe-951c-be542f08bf55-catalog-content\") pod \"redhat-operators-8gj86\" (UID: \"2786ac62-1a3d-46fe-951c-be542f08bf55\") " pod="openshift-marketplace/redhat-operators-8gj86" Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.043498 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2786ac62-1a3d-46fe-951c-be542f08bf55-utilities\") pod \"redhat-operators-8gj86\" (UID: \"2786ac62-1a3d-46fe-951c-be542f08bf55\") " pod="openshift-marketplace/redhat-operators-8gj86" Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.043568 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36e71a2b-e359-4240-b8d0-cf1fbe6f7e52-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.044073 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2786ac62-1a3d-46fe-951c-be542f08bf55-utilities\") pod \"redhat-operators-8gj86\" (UID: 
\"2786ac62-1a3d-46fe-951c-be542f08bf55\") " pod="openshift-marketplace/redhat-operators-8gj86" Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.044112 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2786ac62-1a3d-46fe-951c-be542f08bf55-catalog-content\") pod \"redhat-operators-8gj86\" (UID: \"2786ac62-1a3d-46fe-951c-be542f08bf55\") " pod="openshift-marketplace/redhat-operators-8gj86" Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.047916 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36e71a2b-e359-4240-b8d0-cf1fbe6f7e52-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "36e71a2b-e359-4240-b8d0-cf1fbe6f7e52" (UID: "36e71a2b-e359-4240-b8d0-cf1fbe6f7e52"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.056664 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvz6r\" (UniqueName: \"kubernetes.io/projected/2786ac62-1a3d-46fe-951c-be542f08bf55-kube-api-access-pvz6r\") pod \"redhat-operators-8gj86\" (UID: \"2786ac62-1a3d-46fe-951c-be542f08bf55\") " pod="openshift-marketplace/redhat-operators-8gj86" Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.145355 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36e71a2b-e359-4240-b8d0-cf1fbe6f7e52-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.239162 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8gj86"
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.253907 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9gfhp"]
Dec 05 10:29:38 crc kubenswrapper[4796]: E1205 10:29:38.255745 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e71a2b-e359-4240-b8d0-cf1fbe6f7e52" containerName="pruner"
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.255889 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e71a2b-e359-4240-b8d0-cf1fbe6f7e52" containerName="pruner"
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.256072 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="36e71a2b-e359-4240-b8d0-cf1fbe6f7e52" containerName="pruner"
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.257039 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9gfhp"
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.258602 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9gfhp"]
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.354104 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8701570-2c4b-43e6-9473-a28aae05647f-utilities\") pod \"redhat-operators-9gfhp\" (UID: \"b8701570-2c4b-43e6-9473-a28aae05647f\") " pod="openshift-marketplace/redhat-operators-9gfhp"
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.354320 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8701570-2c4b-43e6-9473-a28aae05647f-catalog-content\") pod \"redhat-operators-9gfhp\" (UID: \"b8701570-2c4b-43e6-9473-a28aae05647f\") " pod="openshift-marketplace/redhat-operators-9gfhp"
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.354379 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bc88\" (UniqueName: \"kubernetes.io/projected/b8701570-2c4b-43e6-9473-a28aae05647f-kube-api-access-6bc88\") pod \"redhat-operators-9gfhp\" (UID: \"b8701570-2c4b-43e6-9473-a28aae05647f\") " pod="openshift-marketplace/redhat-operators-9gfhp"
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.437969 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8gj86"]
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.455727 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8701570-2c4b-43e6-9473-a28aae05647f-catalog-content\") pod \"redhat-operators-9gfhp\" (UID: \"b8701570-2c4b-43e6-9473-a28aae05647f\") " pod="openshift-marketplace/redhat-operators-9gfhp"
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.455762 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bc88\" (UniqueName: \"kubernetes.io/projected/b8701570-2c4b-43e6-9473-a28aae05647f-kube-api-access-6bc88\") pod \"redhat-operators-9gfhp\" (UID: \"b8701570-2c4b-43e6-9473-a28aae05647f\") " pod="openshift-marketplace/redhat-operators-9gfhp"
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.455880 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8701570-2c4b-43e6-9473-a28aae05647f-utilities\") pod \"redhat-operators-9gfhp\" (UID: \"b8701570-2c4b-43e6-9473-a28aae05647f\") " pod="openshift-marketplace/redhat-operators-9gfhp"
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.456216 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8701570-2c4b-43e6-9473-a28aae05647f-catalog-content\") pod \"redhat-operators-9gfhp\" (UID: \"b8701570-2c4b-43e6-9473-a28aae05647f\") " pod="openshift-marketplace/redhat-operators-9gfhp"
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.456223 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8701570-2c4b-43e6-9473-a28aae05647f-utilities\") pod \"redhat-operators-9gfhp\" (UID: \"b8701570-2c4b-43e6-9473-a28aae05647f\") " pod="openshift-marketplace/redhat-operators-9gfhp"
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.471047 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bc88\" (UniqueName: \"kubernetes.io/projected/b8701570-2c4b-43e6-9473-a28aae05647f-kube-api-access-6bc88\") pod \"redhat-operators-9gfhp\" (UID: \"b8701570-2c4b-43e6-9473-a28aae05647f\") " pod="openshift-marketplace/redhat-operators-9gfhp"
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.573805 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9gfhp"
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.778722 4796 generic.go:334] "Generic (PLEG): container finished" podID="2786ac62-1a3d-46fe-951c-be542f08bf55" containerID="8bcbb941ffd8b14338c930f985a21260f87b1f104b58553328c2d19379a20070" exitCode=0
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.778792 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gj86" event={"ID":"2786ac62-1a3d-46fe-951c-be542f08bf55","Type":"ContainerDied","Data":"8bcbb941ffd8b14338c930f985a21260f87b1f104b58553328c2d19379a20070"}
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.778821 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gj86" event={"ID":"2786ac62-1a3d-46fe-951c-be542f08bf55","Type":"ContainerStarted","Data":"42bd1ff35b2aad79a361c64120068e49361cae84a956ce2825f9c87d1c06b743"}
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.781694 4796 generic.go:334] "Generic (PLEG): container finished" podID="873436a9-325d-4ea9-9198-e34642a13a6f" containerID="916d08a359a31691b31308f5c7ac836f77973a4ba8072e3e2d6c3db9f1e0e0f2" exitCode=0
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.781761 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plqgk" event={"ID":"873436a9-325d-4ea9-9198-e34642a13a6f","Type":"ContainerDied","Data":"916d08a359a31691b31308f5c7ac836f77973a4ba8072e3e2d6c3db9f1e0e0f2"}
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.783365 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"36e71a2b-e359-4240-b8d0-cf1fbe6f7e52","Type":"ContainerDied","Data":"b22100d009bc7d44557bb014b9afae7a89741ec133b05e960e9275656c4b0074"}
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.783386 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b22100d009bc7d44557bb014b9afae7a89741ec133b05e960e9275656c4b0074"
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.783420 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.997265 4796 patch_prober.go:28] interesting pod/router-default-5444994796-zsscn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 10:29:38 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld
Dec 05 10:29:38 crc kubenswrapper[4796]: [+]process-running ok
Dec 05 10:29:38 crc kubenswrapper[4796]: healthz check failed
Dec 05 10:29:38 crc kubenswrapper[4796]: I1205 10:29:38.997340 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zsscn" podUID="8c481089-1d82-466f-b015-541a729f07b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.447172 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-jfp56"
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.482637 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.483273 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.485564 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.485659 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.498799 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.568254 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a19e28bb-9b20-4d56-a8fa-4c6bf324f873-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a19e28bb-9b20-4d56-a8fa-4c6bf324f873\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.568413 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a19e28bb-9b20-4d56-a8fa-4c6bf324f873-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a19e28bb-9b20-4d56-a8fa-4c6bf324f873\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.669198 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a19e28bb-9b20-4d56-a8fa-4c6bf324f873-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a19e28bb-9b20-4d56-a8fa-4c6bf324f873\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.669299 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a19e28bb-9b20-4d56-a8fa-4c6bf324f873-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a19e28bb-9b20-4d56-a8fa-4c6bf324f873\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.669563 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a19e28bb-9b20-4d56-a8fa-4c6bf324f873-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a19e28bb-9b20-4d56-a8fa-4c6bf324f873\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.685212 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a19e28bb-9b20-4d56-a8fa-4c6bf324f873-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a19e28bb-9b20-4d56-a8fa-4c6bf324f873\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.809935 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.864258 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-v4phj"
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.868363 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-v4phj"
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.961453 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-t4hjr"
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.961502 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-t4hjr"
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.964717 4796 patch_prober.go:28] interesting pod/console-f9d7485db-t4hjr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.964789 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-t4hjr" podUID="8e207003-e5a7-4cd4-a6e4-748fd8ece44b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused"
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.992998 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-zsscn"
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.996475 4796 patch_prober.go:28] interesting pod/router-default-5444994796-zsscn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 05 10:29:39 crc kubenswrapper[4796]: [-]has-synced failed: reason withheld
Dec 05 10:29:39 crc kubenswrapper[4796]: [+]process-running ok
Dec 05 10:29:39 crc kubenswrapper[4796]: healthz check failed
Dec 05 10:29:39 crc kubenswrapper[4796]: I1205 10:29:39.996547 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zsscn" podUID="8c481089-1d82-466f-b015-541a729f07b7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 05 10:29:40 crc kubenswrapper[4796]: I1205 10:29:40.104882 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4"
Dec 05 10:29:40 crc kubenswrapper[4796]: I1205 10:29:40.995241 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-zsscn"
Dec 05 10:29:40 crc kubenswrapper[4796]: I1205 10:29:40.997760 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-zsscn"
Dec 05 10:29:42 crc kubenswrapper[4796]: I1205 10:29:42.016484 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9gfhp"]
Dec 05 10:29:42 crc kubenswrapper[4796]: I1205 10:29:42.055486 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 05 10:29:42 crc kubenswrapper[4796]: I1205 10:29:42.134827 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-9fpsf"
Dec 05 10:29:45 crc kubenswrapper[4796]: W1205 10:29:45.225311 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8701570_2c4b_43e6_9473_a28aae05647f.slice/crio-1e7124000b083d1aa3817d39b3d0bf40343f64b29fc4bb414f49b9ebc3c4f586 WatchSource:0}: Error finding container 1e7124000b083d1aa3817d39b3d0bf40343f64b29fc4bb414f49b9ebc3c4f586: Status 404 returned error can't find the container with id 1e7124000b083d1aa3817d39b3d0bf40343f64b29fc4bb414f49b9ebc3c4f586
Dec 05 10:29:45 crc kubenswrapper[4796]: W1205 10:29:45.226729 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda19e28bb_9b20_4d56_a8fa_4c6bf324f873.slice/crio-5d04bbcfd4f014471a72a2c3b55239c6d2ffcaa4bb088df66e4edf17a8a394b8 WatchSource:0}: Error finding container 5d04bbcfd4f014471a72a2c3b55239c6d2ffcaa4bb088df66e4edf17a8a394b8: Status 404 returned error can't find the container with id 5d04bbcfd4f014471a72a2c3b55239c6d2ffcaa4bb088df66e4edf17a8a394b8
Dec 05 10:29:45 crc kubenswrapper[4796]: I1205 10:29:45.826696 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gfhp" event={"ID":"b8701570-2c4b-43e6-9473-a28aae05647f","Type":"ContainerStarted","Data":"1e7124000b083d1aa3817d39b3d0bf40343f64b29fc4bb414f49b9ebc3c4f586"}
Dec 05 10:29:45 crc kubenswrapper[4796]: I1205 10:29:45.828500 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a19e28bb-9b20-4d56-a8fa-4c6bf324f873","Type":"ContainerStarted","Data":"5d04bbcfd4f014471a72a2c3b55239c6d2ffcaa4bb088df66e4edf17a8a394b8"}
Dec 05 10:29:49 crc kubenswrapper[4796]: I1205 10:29:49.962934 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-t4hjr"
Dec 05 10:29:49 crc kubenswrapper[4796]: I1205 10:29:49.966286 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-t4hjr"
Dec 05 10:29:51 crc kubenswrapper[4796]: I1205 10:29:51.853192 4796 generic.go:334] "Generic (PLEG): container finished" podID="873436a9-325d-4ea9-9198-e34642a13a6f" containerID="22d8d888f26dcaee47db68fffb5c5e633c28d3f11578de38987d9bd2549a177d" exitCode=0
Dec 05 10:29:51 crc kubenswrapper[4796]: I1205 10:29:51.853587 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plqgk" event={"ID":"873436a9-325d-4ea9-9198-e34642a13a6f","Type":"ContainerDied","Data":"22d8d888f26dcaee47db68fffb5c5e633c28d3f11578de38987d9bd2549a177d"}
Dec 05 10:29:51 crc kubenswrapper[4796]: I1205 10:29:51.855180 4796 generic.go:334] "Generic (PLEG): container finished" podID="656b901f-73ef-4d9d-adb1-a8db28382f48" containerID="5e98a7e96ed5574b9836d27c4d0e5b3d6c666a173dc618dea4280f5e3e0ae0ee" exitCode=0
Dec 05 10:29:51 crc kubenswrapper[4796]: I1205 10:29:51.855234 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xblc" event={"ID":"656b901f-73ef-4d9d-adb1-a8db28382f48","Type":"ContainerDied","Data":"5e98a7e96ed5574b9836d27c4d0e5b3d6c666a173dc618dea4280f5e3e0ae0ee"}
Dec 05 10:29:51 crc kubenswrapper[4796]: I1205 10:29:51.857958 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gln54" event={"ID":"703b0763-b2db-4507-a779-65789d6cba65","Type":"ContainerStarted","Data":"060527c055556dfec36bd0fc1d53f0004fdf6bcc926a9ab82845d48fbdc9fa3e"}
Dec 05 10:29:51 crc kubenswrapper[4796]: I1205 10:29:51.860342 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snx4m" event={"ID":"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957","Type":"ContainerStarted","Data":"960448d7b5c84ac47377a140ebe2b450e422a782335ad486601cc7b2a6cc9481"}
Dec 05 10:29:51 crc kubenswrapper[4796]: I1205 10:29:51.861830 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gj86" event={"ID":"2786ac62-1a3d-46fe-951c-be542f08bf55","Type":"ContainerStarted","Data":"e50cc919e537b034382656e8bd0e283cf501843e17775a4825932db2301ae888"}
Dec 05 10:29:51 crc kubenswrapper[4796]: I1205 10:29:51.863364 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zptnp" event={"ID":"9b4178e4-10a3-4011-8994-b7ca6f64b45d","Type":"ContainerStarted","Data":"7fe94cf5258efb11848b1c08cf51936081f74f79da7a72e2b99faba41cbda078"}
Dec 05 10:29:51 crc kubenswrapper[4796]: I1205 10:29:51.864902 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrz7k" event={"ID":"818eb476-c48b-40fc-9dae-e3d7af8f8f25","Type":"ContainerStarted","Data":"690a8c248a0c44501a0a799bdcd099d8a0f547b6265dacc155d9cb7d29b4ad8d"}
Dec 05 10:29:51 crc kubenswrapper[4796]: I1205 10:29:51.865874 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a19e28bb-9b20-4d56-a8fa-4c6bf324f873","Type":"ContainerStarted","Data":"30e54058ebf7c6846e8f1f9d9c7e2966a12b2e05c4736d5af67203c7d7e1a9af"}
Dec 05 10:29:51 crc kubenswrapper[4796]: I1205 10:29:51.867137 4796 generic.go:334] "Generic (PLEG): container finished" podID="b8701570-2c4b-43e6-9473-a28aae05647f" containerID="2557a068df6d53294f28365a7a5f876efe5a059b9f2cf41b4cbc2ce7ab7f9703" exitCode=0
Dec 05 10:29:51 crc kubenswrapper[4796]: I1205 10:29:51.867162 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gfhp" event={"ID":"b8701570-2c4b-43e6-9473-a28aae05647f","Type":"ContainerDied","Data":"2557a068df6d53294f28365a7a5f876efe5a059b9f2cf41b4cbc2ce7ab7f9703"}
Dec 05 10:29:51 crc kubenswrapper[4796]: I1205 10:29:51.899517 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=12.899499486 podStartE2EDuration="12.899499486s" podCreationTimestamp="2025-12-05 10:29:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:29:51.896675909 +0000 UTC m=+138.184781422" watchObservedRunningTime="2025-12-05 10:29:51.899499486 +0000 UTC m=+138.187604999"
Dec 05 10:29:52 crc kubenswrapper[4796]: I1205 10:29:52.880287 4796 generic.go:334] "Generic (PLEG): container finished" podID="818eb476-c48b-40fc-9dae-e3d7af8f8f25" containerID="690a8c248a0c44501a0a799bdcd099d8a0f547b6265dacc155d9cb7d29b4ad8d" exitCode=0
Dec 05 10:29:52 crc kubenswrapper[4796]: I1205 10:29:52.880507 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrz7k" event={"ID":"818eb476-c48b-40fc-9dae-e3d7af8f8f25","Type":"ContainerDied","Data":"690a8c248a0c44501a0a799bdcd099d8a0f547b6265dacc155d9cb7d29b4ad8d"}
Dec 05 10:29:52 crc kubenswrapper[4796]: I1205 10:29:52.883288 4796 generic.go:334] "Generic (PLEG): container finished" podID="49519f1b-a5e0-4d0f-bf1d-d6927a8f0957" containerID="960448d7b5c84ac47377a140ebe2b450e422a782335ad486601cc7b2a6cc9481" exitCode=0
Dec 05 10:29:52 crc kubenswrapper[4796]: I1205 10:29:52.883354 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snx4m" event={"ID":"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957","Type":"ContainerDied","Data":"960448d7b5c84ac47377a140ebe2b450e422a782335ad486601cc7b2a6cc9481"}
Dec 05 10:29:52 crc kubenswrapper[4796]: I1205 10:29:52.886106 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plqgk" event={"ID":"873436a9-325d-4ea9-9198-e34642a13a6f","Type":"ContainerStarted","Data":"3c75d1ad6b86181d82a732bf74e1fd35495d970222cdc67196a7a6bec408e19e"}
Dec 05 10:29:52 crc kubenswrapper[4796]: I1205 10:29:52.887442 4796 generic.go:334] "Generic (PLEG): container finished" podID="703b0763-b2db-4507-a779-65789d6cba65" containerID="060527c055556dfec36bd0fc1d53f0004fdf6bcc926a9ab82845d48fbdc9fa3e" exitCode=0
Dec 05 10:29:52 crc kubenswrapper[4796]: I1205 10:29:52.887496 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gln54" event={"ID":"703b0763-b2db-4507-a779-65789d6cba65","Type":"ContainerDied","Data":"060527c055556dfec36bd0fc1d53f0004fdf6bcc926a9ab82845d48fbdc9fa3e"}
Dec 05 10:29:52 crc kubenswrapper[4796]: I1205 10:29:52.889661 4796 generic.go:334] "Generic (PLEG): container finished" podID="a19e28bb-9b20-4d56-a8fa-4c6bf324f873" containerID="30e54058ebf7c6846e8f1f9d9c7e2966a12b2e05c4736d5af67203c7d7e1a9af" exitCode=0
Dec 05 10:29:52 crc kubenswrapper[4796]: I1205 10:29:52.889719 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a19e28bb-9b20-4d56-a8fa-4c6bf324f873","Type":"ContainerDied","Data":"30e54058ebf7c6846e8f1f9d9c7e2966a12b2e05c4736d5af67203c7d7e1a9af"}
Dec 05 10:29:52 crc kubenswrapper[4796]: I1205 10:29:52.893245 4796 generic.go:334] "Generic (PLEG): container finished" podID="2786ac62-1a3d-46fe-951c-be542f08bf55" containerID="e50cc919e537b034382656e8bd0e283cf501843e17775a4825932db2301ae888" exitCode=0
Dec 05 10:29:52 crc kubenswrapper[4796]: I1205 10:29:52.893285 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gj86" event={"ID":"2786ac62-1a3d-46fe-951c-be542f08bf55","Type":"ContainerDied","Data":"e50cc919e537b034382656e8bd0e283cf501843e17775a4825932db2301ae888"}
Dec 05 10:29:52 crc kubenswrapper[4796]: I1205 10:29:52.896300 4796 generic.go:334] "Generic (PLEG): container finished" podID="9b4178e4-10a3-4011-8994-b7ca6f64b45d" containerID="7fe94cf5258efb11848b1c08cf51936081f74f79da7a72e2b99faba41cbda078" exitCode=0
Dec 05 10:29:52 crc kubenswrapper[4796]: I1205 10:29:52.896412 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zptnp" event={"ID":"9b4178e4-10a3-4011-8994-b7ca6f64b45d","Type":"ContainerDied","Data":"7fe94cf5258efb11848b1c08cf51936081f74f79da7a72e2b99faba41cbda078"}
Dec 05 10:29:52 crc kubenswrapper[4796]: I1205 10:29:52.899513 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gfhp" event={"ID":"b8701570-2c4b-43e6-9473-a28aae05647f","Type":"ContainerStarted","Data":"68e649412bb6463ce6887ae7a0970d602da7f0a68543702baa87a1d8ccaa617e"}
Dec 05 10:29:52 crc kubenswrapper[4796]: I1205 10:29:52.902484 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xblc" event={"ID":"656b901f-73ef-4d9d-adb1-a8db28382f48","Type":"ContainerStarted","Data":"a71605dfe3217f3b15fc4e9076d36c1409b4eb1b3ff0e1059871bf3772406630"}
Dec 05 10:29:52 crc kubenswrapper[4796]: I1205 10:29:52.923530 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-plqgk" podStartSLOduration=4.940156881 podStartE2EDuration="15.923512495s" podCreationTimestamp="2025-12-05 10:29:37 +0000 UTC" firstStartedPulling="2025-12-05 10:29:41.357738249 +0000 UTC m=+127.645843761" lastFinishedPulling="2025-12-05 10:29:52.341093862 +0000 UTC m=+138.629199375" observedRunningTime="2025-12-05 10:29:52.922413069 +0000 UTC m=+139.210518581" watchObservedRunningTime="2025-12-05 10:29:52.923512495 +0000 UTC m=+139.211618007"
Dec 05 10:29:52 crc kubenswrapper[4796]: I1205 10:29:52.971659 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9xblc" podStartSLOduration=2.4334023 podStartE2EDuration="16.971644731s" podCreationTimestamp="2025-12-05 10:29:36 +0000 UTC" firstStartedPulling="2025-12-05 10:29:37.773790284 +0000 UTC m=+124.061895798" lastFinishedPulling="2025-12-05 10:29:52.312032717 +0000 UTC m=+138.600138229" observedRunningTime="2025-12-05 10:29:52.970773283 +0000 UTC m=+139.258878797" watchObservedRunningTime="2025-12-05 10:29:52.971644731 +0000 UTC m=+139.259750244"
Dec 05 10:29:53 crc kubenswrapper[4796]: I1205 10:29:53.913490 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gj86" event={"ID":"2786ac62-1a3d-46fe-951c-be542f08bf55","Type":"ContainerStarted","Data":"8270e33358c8830e71bd2ed5044b195492f393282a16c3c044aee73060e751a9"}
Dec 05 10:29:53 crc kubenswrapper[4796]: I1205 10:29:53.916095 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zptnp" event={"ID":"9b4178e4-10a3-4011-8994-b7ca6f64b45d","Type":"ContainerStarted","Data":"4bbd9db58b22a6fee8f59144defafcc2e06243e23093334fa25e1cd4bd2444b2"}
Dec 05 10:29:53 crc kubenswrapper[4796]: I1205 10:29:53.917924 4796 generic.go:334] "Generic (PLEG): container finished" podID="b8701570-2c4b-43e6-9473-a28aae05647f" containerID="68e649412bb6463ce6887ae7a0970d602da7f0a68543702baa87a1d8ccaa617e" exitCode=0
Dec 05 10:29:53 crc kubenswrapper[4796]: I1205 10:29:53.917993 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gfhp" event={"ID":"b8701570-2c4b-43e6-9473-a28aae05647f","Type":"ContainerDied","Data":"68e649412bb6463ce6887ae7a0970d602da7f0a68543702baa87a1d8ccaa617e"}
Dec 05 10:29:53 crc kubenswrapper[4796]: I1205 10:29:53.920669 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrz7k" event={"ID":"818eb476-c48b-40fc-9dae-e3d7af8f8f25","Type":"ContainerStarted","Data":"91115a863cd67bb7f7e431046ad667b286c91dbeec10193b9f71fc839ce40dc3"}
Dec 05 10:29:53 crc kubenswrapper[4796]: I1205 10:29:53.922535 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gln54" event={"ID":"703b0763-b2db-4507-a779-65789d6cba65","Type":"ContainerStarted","Data":"0b845ec6f82423168a99db242810c4f7ffb1ae9b605734727bf840f914cf72d7"}
Dec 05 10:29:53 crc kubenswrapper[4796]: I1205 10:29:53.924861 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snx4m" event={"ID":"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957","Type":"ContainerStarted","Data":"3ff3a937cabf9836c0a3e2de56134418d16bca907552095d3d9f295e02a04370"}
Dec 05 10:29:53 crc kubenswrapper[4796]: I1205 10:29:53.935486 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8gj86" podStartSLOduration=4.915898061 podStartE2EDuration="16.935477039s" podCreationTimestamp="2025-12-05 10:29:37 +0000 UTC" firstStartedPulling="2025-12-05 10:29:41.35761105 +0000 UTC m=+127.645716563" lastFinishedPulling="2025-12-05 10:29:53.377190028 +0000 UTC m=+139.665295541" observedRunningTime="2025-12-05 10:29:53.93116749 +0000 UTC m=+140.219273003" watchObservedRunningTime="2025-12-05 10:29:53.935477039 +0000 UTC m=+140.223582552"
Dec 05 10:29:53 crc kubenswrapper[4796]: I1205 10:29:53.946185 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lrz7k" podStartSLOduration=2.338758744 podStartE2EDuration="18.946176767s" podCreationTimestamp="2025-12-05 10:29:35 +0000 UTC" firstStartedPulling="2025-12-05 10:29:36.760515549 +0000 UTC m=+123.048621063" lastFinishedPulling="2025-12-05 10:29:53.367933572 +0000 UTC m=+139.656039086" observedRunningTime="2025-12-05 10:29:53.943440034 +0000 UTC m=+140.231545548" watchObservedRunningTime="2025-12-05 10:29:53.946176767 +0000 UTC m=+140.234282280"
Dec 05 10:29:53 crc kubenswrapper[4796]: I1205 10:29:53.974060 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zptnp" podStartSLOduration=2.221649631 podStartE2EDuration="19.974045292s" podCreationTimestamp="2025-12-05 10:29:34 +0000 UTC" firstStartedPulling="2025-12-05 10:29:35.707569841 +0000 UTC m=+121.995675354" lastFinishedPulling="2025-12-05 10:29:53.459965501 +0000 UTC m=+139.748071015" observedRunningTime="2025-12-05 10:29:53.971208721 +0000 UTC m=+140.259314234" watchObservedRunningTime="2025-12-05 10:29:53.974045292 +0000 UTC m=+140.262150805"
Dec 05 10:29:53 crc kubenswrapper[4796]: I1205 10:29:53.990414 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gln54" podStartSLOduration=2.368510529 podStartE2EDuration="18.990396792s" podCreationTimestamp="2025-12-05 10:29:35 +0000 UTC" firstStartedPulling="2025-12-05 10:29:36.7632454 +0000 UTC m=+123.051350914" lastFinishedPulling="2025-12-05 10:29:53.385131663 +0000 UTC m=+139.673237177" observedRunningTime="2025-12-05 10:29:53.9885761 +0000 UTC m=+140.276681613" watchObservedRunningTime="2025-12-05 10:29:53.990396792 +0000 UTC m=+140.278502305"
Dec 05 10:29:54 crc kubenswrapper[4796]: I1205 10:29:54.004547 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-snx4m" podStartSLOduration=2.347919605 podStartE2EDuration="20.004536094s" podCreationTimestamp="2025-12-05 10:29:34 +0000 UTC" firstStartedPulling="2025-12-05 10:29:35.703661937 +0000 UTC m=+121.991767450" lastFinishedPulling="2025-12-05 10:29:53.360278426 +0000 UTC m=+139.648383939" observedRunningTime="2025-12-05 10:29:54.003770375 +0000 UTC m=+140.291875888" watchObservedRunningTime="2025-12-05 10:29:54.004536094 +0000 UTC m=+140.292641607"
Dec 05 10:29:54 crc kubenswrapper[4796]: I1205 10:29:54.201944 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 05 10:29:54 crc kubenswrapper[4796]: I1205 10:29:54.383334 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a19e28bb-9b20-4d56-a8fa-4c6bf324f873-kube-api-access\") pod \"a19e28bb-9b20-4d56-a8fa-4c6bf324f873\" (UID: \"a19e28bb-9b20-4d56-a8fa-4c6bf324f873\") "
Dec 05 10:29:54 crc kubenswrapper[4796]: I1205 10:29:54.383435 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a19e28bb-9b20-4d56-a8fa-4c6bf324f873-kubelet-dir\") pod \"a19e28bb-9b20-4d56-a8fa-4c6bf324f873\" (UID: \"a19e28bb-9b20-4d56-a8fa-4c6bf324f873\") "
Dec 05 10:29:54 crc kubenswrapper[4796]: I1205 10:29:54.383579 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a19e28bb-9b20-4d56-a8fa-4c6bf324f873-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a19e28bb-9b20-4d56-a8fa-4c6bf324f873" (UID: "a19e28bb-9b20-4d56-a8fa-4c6bf324f873"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 05 10:29:54 crc kubenswrapper[4796]: I1205 10:29:54.383741 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a19e28bb-9b20-4d56-a8fa-4c6bf324f873-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 05 10:29:54 crc kubenswrapper[4796]: I1205 10:29:54.389906 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a19e28bb-9b20-4d56-a8fa-4c6bf324f873-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a19e28bb-9b20-4d56-a8fa-4c6bf324f873" (UID: "a19e28bb-9b20-4d56-a8fa-4c6bf324f873"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 10:29:54 crc kubenswrapper[4796]: I1205 10:29:54.484882 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a19e28bb-9b20-4d56-a8fa-4c6bf324f873-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 05 10:29:54 crc kubenswrapper[4796]: I1205 10:29:54.931042 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gfhp" event={"ID":"b8701570-2c4b-43e6-9473-a28aae05647f","Type":"ContainerStarted","Data":"141d729bca906597086e996d59302eae4bb3c159da198c622d58ae5d5ee80a95"}
Dec 05 10:29:54 crc kubenswrapper[4796]: I1205 10:29:54.932239 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a19e28bb-9b20-4d56-a8fa-4c6bf324f873","Type":"ContainerDied","Data":"5d04bbcfd4f014471a72a2c3b55239c6d2ffcaa4bb088df66e4edf17a8a394b8"}
Dec 05 10:29:54 crc kubenswrapper[4796]: I1205 10:29:54.932258 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d04bbcfd4f014471a72a2c3b55239c6d2ffcaa4bb088df66e4edf17a8a394b8"
Dec 05 10:29:54 crc kubenswrapper[4796]: I1205 10:29:54.932309 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 05 10:29:54 crc kubenswrapper[4796]: I1205 10:29:54.951823 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9gfhp" podStartSLOduration=14.449301492 podStartE2EDuration="16.951799203s" podCreationTimestamp="2025-12-05 10:29:38 +0000 UTC" firstStartedPulling="2025-12-05 10:29:51.869119262 +0000 UTC m=+138.157224775" lastFinishedPulling="2025-12-05 10:29:54.371616974 +0000 UTC m=+140.659722486" observedRunningTime="2025-12-05 10:29:54.950219926 +0000 UTC m=+141.238325438" watchObservedRunningTime="2025-12-05 10:29:54.951799203 +0000 UTC m=+141.239904716"
Dec 05 10:29:55 crc kubenswrapper[4796]: I1205 10:29:55.020669 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zptnp"
Dec 05 10:29:55 crc kubenswrapper[4796]: I1205 10:29:55.020723 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zptnp"
Dec 05 10:29:55 crc kubenswrapper[4796]: I1205 10:29:55.192717 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-snx4m"
Dec 05 10:29:55 crc kubenswrapper[4796]: I1205 10:29:55.192761 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-snx4m"
Dec 05 10:29:55 crc kubenswrapper[4796]: I1205 10:29:55.220747 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-snx4m"
Dec 05 10:29:55 crc kubenswrapper[4796]: I1205 10:29:55.220828 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zptnp"
Dec 05 10:29:55 crc kubenswrapper[4796]: I1205 10:29:55.369111 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status=""
pod="openshift-marketplace/certified-operators-gln54" Dec 05 10:29:55 crc kubenswrapper[4796]: I1205 10:29:55.369339 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gln54" Dec 05 10:29:55 crc kubenswrapper[4796]: I1205 10:29:55.403218 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gln54" Dec 05 10:29:55 crc kubenswrapper[4796]: I1205 10:29:55.426200 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:29:55 crc kubenswrapper[4796]: I1205 10:29:55.582259 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lrz7k" Dec 05 10:29:55 crc kubenswrapper[4796]: I1205 10:29:55.582303 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lrz7k" Dec 05 10:29:55 crc kubenswrapper[4796]: I1205 10:29:55.610011 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lrz7k" Dec 05 10:29:55 crc kubenswrapper[4796]: I1205 10:29:55.926161 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:29:57 crc kubenswrapper[4796]: I1205 10:29:57.176532 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9xblc" Dec 05 10:29:57 crc kubenswrapper[4796]: I1205 10:29:57.176584 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9xblc" Dec 05 10:29:57 crc kubenswrapper[4796]: I1205 10:29:57.203751 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9xblc" Dec 05 10:29:57 crc 
kubenswrapper[4796]: I1205 10:29:57.582802 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-plqgk" Dec 05 10:29:57 crc kubenswrapper[4796]: I1205 10:29:57.583066 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-plqgk" Dec 05 10:29:57 crc kubenswrapper[4796]: I1205 10:29:57.610556 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-plqgk" Dec 05 10:29:57 crc kubenswrapper[4796]: I1205 10:29:57.972591 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9xblc" Dec 05 10:29:57 crc kubenswrapper[4796]: I1205 10:29:57.973986 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-plqgk" Dec 05 10:29:58 crc kubenswrapper[4796]: I1205 10:29:58.240256 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8gj86" Dec 05 10:29:58 crc kubenswrapper[4796]: I1205 10:29:58.240787 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8gj86" Dec 05 10:29:58 crc kubenswrapper[4796]: I1205 10:29:58.268900 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8gj86" Dec 05 10:29:58 crc kubenswrapper[4796]: I1205 10:29:58.575193 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9gfhp" Dec 05 10:29:58 crc kubenswrapper[4796]: I1205 10:29:58.575481 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9gfhp" Dec 05 10:29:58 crc kubenswrapper[4796]: I1205 10:29:58.980392 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-8gj86" Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.045052 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.045142 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.045188 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.045219 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.046649 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 05 10:29:59 
crc kubenswrapper[4796]: I1205 10:29:59.047152 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.047297 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.056551 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.056648 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.061511 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.070150 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.070168 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" 
(UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.340526 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.346651 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.350986 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.603640 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9gfhp" podUID="b8701570-2c4b-43e6-9473-a28aae05647f" containerName="registry-server" probeResult="failure" output=< Dec 05 10:29:59 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Dec 05 10:29:59 crc kubenswrapper[4796]: > Dec 05 10:29:59 crc kubenswrapper[4796]: W1205 10:29:59.718986 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-8b12fb537d518268ec2a7875a005d11e62efc1e8bf7803cf087ef3f4b332b42b WatchSource:0}: Error finding container 8b12fb537d518268ec2a7875a005d11e62efc1e8bf7803cf087ef3f4b332b42b: Status 404 returned error can't find the container with id 8b12fb537d518268ec2a7875a005d11e62efc1e8bf7803cf087ef3f4b332b42b Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.955549 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"80ea4381a4cbcd918a44cf04a464b5dd572dfcb336f0a63200c937a432b0b7c9"} Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.955594 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8fd3b9f574412beaa532fdf5e690c716abc39bca71cf4529e38239aca5d2124e"} Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.957473 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"009bc6cf243f282282f0098fc1d01c87cc7e611b9556ce8c3a785e0ee4cc968d"} Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.957519 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9865a9c5f85be223fd27c5ade1d9dd2f556c50fcbb58f40b2ac3ede78ee5c0db"} Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.960393 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b63658cafbd1bfa69346a304b8791eada7e7b1ceef8a9db7906276d210e99c30"} Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.960423 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8b12fb537d518268ec2a7875a005d11e62efc1e8bf7803cf087ef3f4b332b42b"} Dec 05 10:29:59 crc kubenswrapper[4796]: I1205 10:29:59.960727 4796 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.125276 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw"] Dec 05 10:30:00 crc kubenswrapper[4796]: E1205 10:30:00.125847 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19e28bb-9b20-4d56-a8fa-4c6bf324f873" containerName="pruner" Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.125859 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19e28bb-9b20-4d56-a8fa-4c6bf324f873" containerName="pruner" Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.125957 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19e28bb-9b20-4d56-a8fa-4c6bf324f873" containerName="pruner" Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.126277 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw" Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.128112 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.128542 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.134853 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw"] Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.259544 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ea62ad1-69f6-43c0-a663-293a0346277c-config-volume\") pod \"collect-profiles-29415510-kdvgw\" (UID: \"7ea62ad1-69f6-43c0-a663-293a0346277c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw" Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.259651 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pffz\" (UniqueName: \"kubernetes.io/projected/7ea62ad1-69f6-43c0-a663-293a0346277c-kube-api-access-4pffz\") pod \"collect-profiles-29415510-kdvgw\" (UID: \"7ea62ad1-69f6-43c0-a663-293a0346277c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw" Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.259723 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ea62ad1-69f6-43c0-a663-293a0346277c-secret-volume\") pod \"collect-profiles-29415510-kdvgw\" (UID: \"7ea62ad1-69f6-43c0-a663-293a0346277c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw" Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.361288 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ea62ad1-69f6-43c0-a663-293a0346277c-config-volume\") pod \"collect-profiles-29415510-kdvgw\" (UID: \"7ea62ad1-69f6-43c0-a663-293a0346277c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw" Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.361368 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pffz\" (UniqueName: \"kubernetes.io/projected/7ea62ad1-69f6-43c0-a663-293a0346277c-kube-api-access-4pffz\") pod \"collect-profiles-29415510-kdvgw\" (UID: \"7ea62ad1-69f6-43c0-a663-293a0346277c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw" Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.361401 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ea62ad1-69f6-43c0-a663-293a0346277c-secret-volume\") pod \"collect-profiles-29415510-kdvgw\" (UID: \"7ea62ad1-69f6-43c0-a663-293a0346277c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw" Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.362126 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ea62ad1-69f6-43c0-a663-293a0346277c-config-volume\") pod \"collect-profiles-29415510-kdvgw\" (UID: \"7ea62ad1-69f6-43c0-a663-293a0346277c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw" Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.365825 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7ea62ad1-69f6-43c0-a663-293a0346277c-secret-volume\") pod \"collect-profiles-29415510-kdvgw\" (UID: \"7ea62ad1-69f6-43c0-a663-293a0346277c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw" Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.376842 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pffz\" (UniqueName: \"kubernetes.io/projected/7ea62ad1-69f6-43c0-a663-293a0346277c-kube-api-access-4pffz\") pod \"collect-profiles-29415510-kdvgw\" (UID: \"7ea62ad1-69f6-43c0-a663-293a0346277c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw" Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.438999 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw" Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.789899 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw"] Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.965587 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw" event={"ID":"7ea62ad1-69f6-43c0-a663-293a0346277c","Type":"ContainerStarted","Data":"62a2978bab674f67dc7cc4e0692d055d45c445ec858d6c2ad79c884b4e06289d"} Dec 05 10:30:00 crc kubenswrapper[4796]: I1205 10:30:00.965622 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw" event={"ID":"7ea62ad1-69f6-43c0-a663-293a0346277c","Type":"ContainerStarted","Data":"bf410ee106d2300cdd950a37a48e3a52fc04628aab4371cb6bd348c379c4ab33"} Dec 05 10:30:01 crc kubenswrapper[4796]: I1205 10:30:01.970647 4796 generic.go:334] "Generic (PLEG): container finished" podID="7ea62ad1-69f6-43c0-a663-293a0346277c" 
containerID="62a2978bab674f67dc7cc4e0692d055d45c445ec858d6c2ad79c884b4e06289d" exitCode=0 Dec 05 10:30:01 crc kubenswrapper[4796]: I1205 10:30:01.970720 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw" event={"ID":"7ea62ad1-69f6-43c0-a663-293a0346277c","Type":"ContainerDied","Data":"62a2978bab674f67dc7cc4e0692d055d45c445ec858d6c2ad79c884b4e06289d"} Dec 05 10:30:02 crc kubenswrapper[4796]: I1205 10:30:02.244150 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-plqgk"] Dec 05 10:30:02 crc kubenswrapper[4796]: I1205 10:30:02.244327 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-plqgk" podUID="873436a9-325d-4ea9-9198-e34642a13a6f" containerName="registry-server" containerID="cri-o://3c75d1ad6b86181d82a732bf74e1fd35495d970222cdc67196a7a6bec408e19e" gracePeriod=2 Dec 05 10:30:02 crc kubenswrapper[4796]: I1205 10:30:02.642127 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plqgk" Dec 05 10:30:02 crc kubenswrapper[4796]: I1205 10:30:02.793762 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/873436a9-325d-4ea9-9198-e34642a13a6f-utilities\") pod \"873436a9-325d-4ea9-9198-e34642a13a6f\" (UID: \"873436a9-325d-4ea9-9198-e34642a13a6f\") " Dec 05 10:30:02 crc kubenswrapper[4796]: I1205 10:30:02.793809 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttzr5\" (UniqueName: \"kubernetes.io/projected/873436a9-325d-4ea9-9198-e34642a13a6f-kube-api-access-ttzr5\") pod \"873436a9-325d-4ea9-9198-e34642a13a6f\" (UID: \"873436a9-325d-4ea9-9198-e34642a13a6f\") " Dec 05 10:30:02 crc kubenswrapper[4796]: I1205 10:30:02.793897 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/873436a9-325d-4ea9-9198-e34642a13a6f-catalog-content\") pod \"873436a9-325d-4ea9-9198-e34642a13a6f\" (UID: \"873436a9-325d-4ea9-9198-e34642a13a6f\") " Dec 05 10:30:02 crc kubenswrapper[4796]: I1205 10:30:02.794569 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/873436a9-325d-4ea9-9198-e34642a13a6f-utilities" (OuterVolumeSpecName: "utilities") pod "873436a9-325d-4ea9-9198-e34642a13a6f" (UID: "873436a9-325d-4ea9-9198-e34642a13a6f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:30:02 crc kubenswrapper[4796]: I1205 10:30:02.798771 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/873436a9-325d-4ea9-9198-e34642a13a6f-kube-api-access-ttzr5" (OuterVolumeSpecName: "kube-api-access-ttzr5") pod "873436a9-325d-4ea9-9198-e34642a13a6f" (UID: "873436a9-325d-4ea9-9198-e34642a13a6f"). InnerVolumeSpecName "kube-api-access-ttzr5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:30:02 crc kubenswrapper[4796]: I1205 10:30:02.808225 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/873436a9-325d-4ea9-9198-e34642a13a6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "873436a9-325d-4ea9-9198-e34642a13a6f" (UID: "873436a9-325d-4ea9-9198-e34642a13a6f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:30:02 crc kubenswrapper[4796]: I1205 10:30:02.895308 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/873436a9-325d-4ea9-9198-e34642a13a6f-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:02 crc kubenswrapper[4796]: I1205 10:30:02.895338 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttzr5\" (UniqueName: \"kubernetes.io/projected/873436a9-325d-4ea9-9198-e34642a13a6f-kube-api-access-ttzr5\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:02 crc kubenswrapper[4796]: I1205 10:30:02.895349 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/873436a9-325d-4ea9-9198-e34642a13a6f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:02 crc kubenswrapper[4796]: I1205 10:30:02.976628 4796 generic.go:334] "Generic (PLEG): container finished" podID="873436a9-325d-4ea9-9198-e34642a13a6f" containerID="3c75d1ad6b86181d82a732bf74e1fd35495d970222cdc67196a7a6bec408e19e" exitCode=0 Dec 05 10:30:02 crc kubenswrapper[4796]: I1205 10:30:02.976675 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-plqgk" Dec 05 10:30:02 crc kubenswrapper[4796]: I1205 10:30:02.976706 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plqgk" event={"ID":"873436a9-325d-4ea9-9198-e34642a13a6f","Type":"ContainerDied","Data":"3c75d1ad6b86181d82a732bf74e1fd35495d970222cdc67196a7a6bec408e19e"} Dec 05 10:30:02 crc kubenswrapper[4796]: I1205 10:30:02.976748 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-plqgk" event={"ID":"873436a9-325d-4ea9-9198-e34642a13a6f","Type":"ContainerDied","Data":"c9cf019f50042151eef3946f664d3c9bbee22c215ca44bf909fae540861b7a0e"} Dec 05 10:30:02 crc kubenswrapper[4796]: I1205 10:30:02.976769 4796 scope.go:117] "RemoveContainer" containerID="3c75d1ad6b86181d82a732bf74e1fd35495d970222cdc67196a7a6bec408e19e" Dec 05 10:30:02 crc kubenswrapper[4796]: I1205 10:30:02.996113 4796 scope.go:117] "RemoveContainer" containerID="22d8d888f26dcaee47db68fffb5c5e633c28d3f11578de38987d9bd2549a177d" Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.005650 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-plqgk"] Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.007832 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-plqgk"] Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.018831 4796 scope.go:117] "RemoveContainer" containerID="916d08a359a31691b31308f5c7ac836f77973a4ba8072e3e2d6c3db9f1e0e0f2" Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.029152 4796 scope.go:117] "RemoveContainer" containerID="3c75d1ad6b86181d82a732bf74e1fd35495d970222cdc67196a7a6bec408e19e" Dec 05 10:30:03 crc kubenswrapper[4796]: E1205 10:30:03.029475 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3c75d1ad6b86181d82a732bf74e1fd35495d970222cdc67196a7a6bec408e19e\": container with ID starting with 3c75d1ad6b86181d82a732bf74e1fd35495d970222cdc67196a7a6bec408e19e not found: ID does not exist" containerID="3c75d1ad6b86181d82a732bf74e1fd35495d970222cdc67196a7a6bec408e19e" Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.029517 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c75d1ad6b86181d82a732bf74e1fd35495d970222cdc67196a7a6bec408e19e"} err="failed to get container status \"3c75d1ad6b86181d82a732bf74e1fd35495d970222cdc67196a7a6bec408e19e\": rpc error: code = NotFound desc = could not find container \"3c75d1ad6b86181d82a732bf74e1fd35495d970222cdc67196a7a6bec408e19e\": container with ID starting with 3c75d1ad6b86181d82a732bf74e1fd35495d970222cdc67196a7a6bec408e19e not found: ID does not exist" Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.029566 4796 scope.go:117] "RemoveContainer" containerID="22d8d888f26dcaee47db68fffb5c5e633c28d3f11578de38987d9bd2549a177d" Dec 05 10:30:03 crc kubenswrapper[4796]: E1205 10:30:03.029901 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22d8d888f26dcaee47db68fffb5c5e633c28d3f11578de38987d9bd2549a177d\": container with ID starting with 22d8d888f26dcaee47db68fffb5c5e633c28d3f11578de38987d9bd2549a177d not found: ID does not exist" containerID="22d8d888f26dcaee47db68fffb5c5e633c28d3f11578de38987d9bd2549a177d" Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.029925 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22d8d888f26dcaee47db68fffb5c5e633c28d3f11578de38987d9bd2549a177d"} err="failed to get container status \"22d8d888f26dcaee47db68fffb5c5e633c28d3f11578de38987d9bd2549a177d\": rpc error: code = NotFound desc = could not find container \"22d8d888f26dcaee47db68fffb5c5e633c28d3f11578de38987d9bd2549a177d\": container with ID 
starting with 22d8d888f26dcaee47db68fffb5c5e633c28d3f11578de38987d9bd2549a177d not found: ID does not exist" Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.029938 4796 scope.go:117] "RemoveContainer" containerID="916d08a359a31691b31308f5c7ac836f77973a4ba8072e3e2d6c3db9f1e0e0f2" Dec 05 10:30:03 crc kubenswrapper[4796]: E1205 10:30:03.030190 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"916d08a359a31691b31308f5c7ac836f77973a4ba8072e3e2d6c3db9f1e0e0f2\": container with ID starting with 916d08a359a31691b31308f5c7ac836f77973a4ba8072e3e2d6c3db9f1e0e0f2 not found: ID does not exist" containerID="916d08a359a31691b31308f5c7ac836f77973a4ba8072e3e2d6c3db9f1e0e0f2" Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.030212 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"916d08a359a31691b31308f5c7ac836f77973a4ba8072e3e2d6c3db9f1e0e0f2"} err="failed to get container status \"916d08a359a31691b31308f5c7ac836f77973a4ba8072e3e2d6c3db9f1e0e0f2\": rpc error: code = NotFound desc = could not find container \"916d08a359a31691b31308f5c7ac836f77973a4ba8072e3e2d6c3db9f1e0e0f2\": container with ID starting with 916d08a359a31691b31308f5c7ac836f77973a4ba8072e3e2d6c3db9f1e0e0f2 not found: ID does not exist" Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.218120 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw" Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.400939 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ea62ad1-69f6-43c0-a663-293a0346277c-config-volume\") pod \"7ea62ad1-69f6-43c0-a663-293a0346277c\" (UID: \"7ea62ad1-69f6-43c0-a663-293a0346277c\") " Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.400996 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ea62ad1-69f6-43c0-a663-293a0346277c-secret-volume\") pod \"7ea62ad1-69f6-43c0-a663-293a0346277c\" (UID: \"7ea62ad1-69f6-43c0-a663-293a0346277c\") " Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.401032 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pffz\" (UniqueName: \"kubernetes.io/projected/7ea62ad1-69f6-43c0-a663-293a0346277c-kube-api-access-4pffz\") pod \"7ea62ad1-69f6-43c0-a663-293a0346277c\" (UID: \"7ea62ad1-69f6-43c0-a663-293a0346277c\") " Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.401567 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ea62ad1-69f6-43c0-a663-293a0346277c-config-volume" (OuterVolumeSpecName: "config-volume") pod "7ea62ad1-69f6-43c0-a663-293a0346277c" (UID: "7ea62ad1-69f6-43c0-a663-293a0346277c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.407568 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ea62ad1-69f6-43c0-a663-293a0346277c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7ea62ad1-69f6-43c0-a663-293a0346277c" (UID: "7ea62ad1-69f6-43c0-a663-293a0346277c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.407617 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea62ad1-69f6-43c0-a663-293a0346277c-kube-api-access-4pffz" (OuterVolumeSpecName: "kube-api-access-4pffz") pod "7ea62ad1-69f6-43c0-a663-293a0346277c" (UID: "7ea62ad1-69f6-43c0-a663-293a0346277c"). InnerVolumeSpecName "kube-api-access-4pffz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.502776 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7ea62ad1-69f6-43c0-a663-293a0346277c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.502803 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pffz\" (UniqueName: \"kubernetes.io/projected/7ea62ad1-69f6-43c0-a663-293a0346277c-kube-api-access-4pffz\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.502814 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ea62ad1-69f6-43c0-a663-293a0346277c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.982890 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw" event={"ID":"7ea62ad1-69f6-43c0-a663-293a0346277c","Type":"ContainerDied","Data":"bf410ee106d2300cdd950a37a48e3a52fc04628aab4371cb6bd348c379c4ab33"} Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.982919 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf410ee106d2300cdd950a37a48e3a52fc04628aab4371cb6bd348c379c4ab33" Dec 05 10:30:03 crc kubenswrapper[4796]: I1205 10:30:03.982934 4796 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw" Dec 05 10:30:04 crc kubenswrapper[4796]: I1205 10:30:04.035413 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="873436a9-325d-4ea9-9198-e34642a13a6f" path="/var/lib/kubelet/pods/873436a9-325d-4ea9-9198-e34642a13a6f/volumes" Dec 05 10:30:05 crc kubenswrapper[4796]: I1205 10:30:05.047198 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zptnp" Dec 05 10:30:05 crc kubenswrapper[4796]: I1205 10:30:05.177093 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:30:05 crc kubenswrapper[4796]: I1205 10:30:05.177139 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:30:05 crc kubenswrapper[4796]: I1205 10:30:05.217465 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-snx4m" Dec 05 10:30:05 crc kubenswrapper[4796]: I1205 10:30:05.395384 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gln54" Dec 05 10:30:05 crc kubenswrapper[4796]: I1205 10:30:05.610297 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lrz7k" Dec 05 10:30:07 crc kubenswrapper[4796]: I1205 10:30:07.070270 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-qvpdg"] Dec 05 10:30:07 crc kubenswrapper[4796]: I1205 10:30:07.445151 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lrz7k"] Dec 05 10:30:07 crc kubenswrapper[4796]: I1205 10:30:07.445364 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lrz7k" podUID="818eb476-c48b-40fc-9dae-e3d7af8f8f25" containerName="registry-server" containerID="cri-o://91115a863cd67bb7f7e431046ad667b286c91dbeec10193b9f71fc839ce40dc3" gracePeriod=2 Dec 05 10:30:07 crc kubenswrapper[4796]: I1205 10:30:07.648604 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gln54"] Dec 05 10:30:07 crc kubenswrapper[4796]: I1205 10:30:07.649303 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gln54" podUID="703b0763-b2db-4507-a779-65789d6cba65" containerName="registry-server" containerID="cri-o://0b845ec6f82423168a99db242810c4f7ffb1ae9b605734727bf840f914cf72d7" gracePeriod=2 Dec 05 10:30:07 crc kubenswrapper[4796]: I1205 10:30:07.871480 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lrz7k" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.006908 4796 generic.go:334] "Generic (PLEG): container finished" podID="818eb476-c48b-40fc-9dae-e3d7af8f8f25" containerID="91115a863cd67bb7f7e431046ad667b286c91dbeec10193b9f71fc839ce40dc3" exitCode=0 Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.006964 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lrz7k" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.006973 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrz7k" event={"ID":"818eb476-c48b-40fc-9dae-e3d7af8f8f25","Type":"ContainerDied","Data":"91115a863cd67bb7f7e431046ad667b286c91dbeec10193b9f71fc839ce40dc3"} Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.007004 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrz7k" event={"ID":"818eb476-c48b-40fc-9dae-e3d7af8f8f25","Type":"ContainerDied","Data":"6ad3919a325262ce010f1afb0c45745a43a68aa3a4a9ec0dce94b270fafec061"} Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.007020 4796 scope.go:117] "RemoveContainer" containerID="91115a863cd67bb7f7e431046ad667b286c91dbeec10193b9f71fc839ce40dc3" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.008722 4796 generic.go:334] "Generic (PLEG): container finished" podID="703b0763-b2db-4507-a779-65789d6cba65" containerID="0b845ec6f82423168a99db242810c4f7ffb1ae9b605734727bf840f914cf72d7" exitCode=0 Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.008750 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gln54" event={"ID":"703b0763-b2db-4507-a779-65789d6cba65","Type":"ContainerDied","Data":"0b845ec6f82423168a99db242810c4f7ffb1ae9b605734727bf840f914cf72d7"} Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.022311 4796 scope.go:117] "RemoveContainer" containerID="690a8c248a0c44501a0a799bdcd099d8a0f547b6265dacc155d9cb7d29b4ad8d" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.043738 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gln54" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.043959 4796 scope.go:117] "RemoveContainer" containerID="bfb301cd79344fd373c936dac7f76aa26ec79274d0b80e6313817c410f755167" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.056565 4796 scope.go:117] "RemoveContainer" containerID="91115a863cd67bb7f7e431046ad667b286c91dbeec10193b9f71fc839ce40dc3" Dec 05 10:30:08 crc kubenswrapper[4796]: E1205 10:30:08.056868 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91115a863cd67bb7f7e431046ad667b286c91dbeec10193b9f71fc839ce40dc3\": container with ID starting with 91115a863cd67bb7f7e431046ad667b286c91dbeec10193b9f71fc839ce40dc3 not found: ID does not exist" containerID="91115a863cd67bb7f7e431046ad667b286c91dbeec10193b9f71fc839ce40dc3" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.056908 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91115a863cd67bb7f7e431046ad667b286c91dbeec10193b9f71fc839ce40dc3"} err="failed to get container status \"91115a863cd67bb7f7e431046ad667b286c91dbeec10193b9f71fc839ce40dc3\": rpc error: code = NotFound desc = could not find container \"91115a863cd67bb7f7e431046ad667b286c91dbeec10193b9f71fc839ce40dc3\": container with ID starting with 91115a863cd67bb7f7e431046ad667b286c91dbeec10193b9f71fc839ce40dc3 not found: ID does not exist" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.056933 4796 scope.go:117] "RemoveContainer" containerID="690a8c248a0c44501a0a799bdcd099d8a0f547b6265dacc155d9cb7d29b4ad8d" Dec 05 10:30:08 crc kubenswrapper[4796]: E1205 10:30:08.057183 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"690a8c248a0c44501a0a799bdcd099d8a0f547b6265dacc155d9cb7d29b4ad8d\": container with ID starting with 
690a8c248a0c44501a0a799bdcd099d8a0f547b6265dacc155d9cb7d29b4ad8d not found: ID does not exist" containerID="690a8c248a0c44501a0a799bdcd099d8a0f547b6265dacc155d9cb7d29b4ad8d" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.057216 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"690a8c248a0c44501a0a799bdcd099d8a0f547b6265dacc155d9cb7d29b4ad8d"} err="failed to get container status \"690a8c248a0c44501a0a799bdcd099d8a0f547b6265dacc155d9cb7d29b4ad8d\": rpc error: code = NotFound desc = could not find container \"690a8c248a0c44501a0a799bdcd099d8a0f547b6265dacc155d9cb7d29b4ad8d\": container with ID starting with 690a8c248a0c44501a0a799bdcd099d8a0f547b6265dacc155d9cb7d29b4ad8d not found: ID does not exist" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.057236 4796 scope.go:117] "RemoveContainer" containerID="bfb301cd79344fd373c936dac7f76aa26ec79274d0b80e6313817c410f755167" Dec 05 10:30:08 crc kubenswrapper[4796]: E1205 10:30:08.057642 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfb301cd79344fd373c936dac7f76aa26ec79274d0b80e6313817c410f755167\": container with ID starting with bfb301cd79344fd373c936dac7f76aa26ec79274d0b80e6313817c410f755167 not found: ID does not exist" containerID="bfb301cd79344fd373c936dac7f76aa26ec79274d0b80e6313817c410f755167" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.057663 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfb301cd79344fd373c936dac7f76aa26ec79274d0b80e6313817c410f755167"} err="failed to get container status \"bfb301cd79344fd373c936dac7f76aa26ec79274d0b80e6313817c410f755167\": rpc error: code = NotFound desc = could not find container \"bfb301cd79344fd373c936dac7f76aa26ec79274d0b80e6313817c410f755167\": container with ID starting with bfb301cd79344fd373c936dac7f76aa26ec79274d0b80e6313817c410f755167 not found: ID does not 
exist" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.063244 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csgjr\" (UniqueName: \"kubernetes.io/projected/818eb476-c48b-40fc-9dae-e3d7af8f8f25-kube-api-access-csgjr\") pod \"818eb476-c48b-40fc-9dae-e3d7af8f8f25\" (UID: \"818eb476-c48b-40fc-9dae-e3d7af8f8f25\") " Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.063318 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/818eb476-c48b-40fc-9dae-e3d7af8f8f25-utilities\") pod \"818eb476-c48b-40fc-9dae-e3d7af8f8f25\" (UID: \"818eb476-c48b-40fc-9dae-e3d7af8f8f25\") " Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.063366 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/818eb476-c48b-40fc-9dae-e3d7af8f8f25-catalog-content\") pod \"818eb476-c48b-40fc-9dae-e3d7af8f8f25\" (UID: \"818eb476-c48b-40fc-9dae-e3d7af8f8f25\") " Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.064060 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/818eb476-c48b-40fc-9dae-e3d7af8f8f25-utilities" (OuterVolumeSpecName: "utilities") pod "818eb476-c48b-40fc-9dae-e3d7af8f8f25" (UID: "818eb476-c48b-40fc-9dae-e3d7af8f8f25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.069064 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/818eb476-c48b-40fc-9dae-e3d7af8f8f25-kube-api-access-csgjr" (OuterVolumeSpecName: "kube-api-access-csgjr") pod "818eb476-c48b-40fc-9dae-e3d7af8f8f25" (UID: "818eb476-c48b-40fc-9dae-e3d7af8f8f25"). InnerVolumeSpecName "kube-api-access-csgjr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.106062 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/818eb476-c48b-40fc-9dae-e3d7af8f8f25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "818eb476-c48b-40fc-9dae-e3d7af8f8f25" (UID: "818eb476-c48b-40fc-9dae-e3d7af8f8f25"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.165036 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2p4m\" (UniqueName: \"kubernetes.io/projected/703b0763-b2db-4507-a779-65789d6cba65-kube-api-access-m2p4m\") pod \"703b0763-b2db-4507-a779-65789d6cba65\" (UID: \"703b0763-b2db-4507-a779-65789d6cba65\") " Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.165143 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/703b0763-b2db-4507-a779-65789d6cba65-catalog-content\") pod \"703b0763-b2db-4507-a779-65789d6cba65\" (UID: \"703b0763-b2db-4507-a779-65789d6cba65\") " Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.165267 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/703b0763-b2db-4507-a779-65789d6cba65-utilities\") pod \"703b0763-b2db-4507-a779-65789d6cba65\" (UID: \"703b0763-b2db-4507-a779-65789d6cba65\") " Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.165549 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csgjr\" (UniqueName: \"kubernetes.io/projected/818eb476-c48b-40fc-9dae-e3d7af8f8f25-kube-api-access-csgjr\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.165565 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/818eb476-c48b-40fc-9dae-e3d7af8f8f25-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.165575 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/818eb476-c48b-40fc-9dae-e3d7af8f8f25-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.165922 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/703b0763-b2db-4507-a779-65789d6cba65-utilities" (OuterVolumeSpecName: "utilities") pod "703b0763-b2db-4507-a779-65789d6cba65" (UID: "703b0763-b2db-4507-a779-65789d6cba65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.170771 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/703b0763-b2db-4507-a779-65789d6cba65-kube-api-access-m2p4m" (OuterVolumeSpecName: "kube-api-access-m2p4m") pod "703b0763-b2db-4507-a779-65789d6cba65" (UID: "703b0763-b2db-4507-a779-65789d6cba65"). InnerVolumeSpecName "kube-api-access-m2p4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.206043 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/703b0763-b2db-4507-a779-65789d6cba65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "703b0763-b2db-4507-a779-65789d6cba65" (UID: "703b0763-b2db-4507-a779-65789d6cba65"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.266558 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/703b0763-b2db-4507-a779-65789d6cba65-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.266594 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/703b0763-b2db-4507-a779-65789d6cba65-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.266606 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2p4m\" (UniqueName: \"kubernetes.io/projected/703b0763-b2db-4507-a779-65789d6cba65-kube-api-access-m2p4m\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.333249 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lrz7k"] Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.335628 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lrz7k"] Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.606572 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9gfhp" Dec 05 10:30:08 crc kubenswrapper[4796]: I1205 10:30:08.639371 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9gfhp" Dec 05 10:30:09 crc kubenswrapper[4796]: I1205 10:30:09.014668 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gln54" event={"ID":"703b0763-b2db-4507-a779-65789d6cba65","Type":"ContainerDied","Data":"e6b5bd358b45b653e5222126be9f30bdd7b2cdd974a78ec46d51e9415763fdd3"} Dec 05 10:30:09 crc kubenswrapper[4796]: I1205 10:30:09.014742 4796 scope.go:117] 
"RemoveContainer" containerID="0b845ec6f82423168a99db242810c4f7ffb1ae9b605734727bf840f914cf72d7" Dec 05 10:30:09 crc kubenswrapper[4796]: I1205 10:30:09.014763 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gln54" Dec 05 10:30:09 crc kubenswrapper[4796]: I1205 10:30:09.034306 4796 scope.go:117] "RemoveContainer" containerID="060527c055556dfec36bd0fc1d53f0004fdf6bcc926a9ab82845d48fbdc9fa3e" Dec 05 10:30:09 crc kubenswrapper[4796]: I1205 10:30:09.041764 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gln54"] Dec 05 10:30:09 crc kubenswrapper[4796]: I1205 10:30:09.045392 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gln54"] Dec 05 10:30:09 crc kubenswrapper[4796]: I1205 10:30:09.066036 4796 scope.go:117] "RemoveContainer" containerID="01272087109f13635e5f69fb0f4d00f63b8c7d3a29f3ca3fb922643a343af958" Dec 05 10:30:10 crc kubenswrapper[4796]: I1205 10:30:10.036139 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="703b0763-b2db-4507-a779-65789d6cba65" path="/var/lib/kubelet/pods/703b0763-b2db-4507-a779-65789d6cba65/volumes" Dec 05 10:30:10 crc kubenswrapper[4796]: I1205 10:30:10.037035 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="818eb476-c48b-40fc-9dae-e3d7af8f8f25" path="/var/lib/kubelet/pods/818eb476-c48b-40fc-9dae-e3d7af8f8f25/volumes" Dec 05 10:30:10 crc kubenswrapper[4796]: I1205 10:30:10.390573 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-blx8z" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.045447 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-j7krr"] Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.045837 4796 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" podUID="a0677773-5515-4b74-9975-75dc72e6a127" containerName="controller-manager" containerID="cri-o://37c1890208b4f8639a3faf14af8bf9444bfdb383a0ff934b221a03db9f5e4cb4" gracePeriod=30 Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.052250 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9gfhp"] Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.052651 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9gfhp" podUID="b8701570-2c4b-43e6-9473-a28aae05647f" containerName="registry-server" containerID="cri-o://141d729bca906597086e996d59302eae4bb3c159da198c622d58ae5d5ee80a95" gracePeriod=2 Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.164285 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh"] Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.164474 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" podUID="c970b9fd-5a2c-43db-a77c-078bcfbe3f6f" containerName="route-controller-manager" containerID="cri-o://e029260f600ee56f8c15414d018797bee09922ec75b8c4db5db6c47feed37497" gracePeriod=30 Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.516084 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9gfhp" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.604567 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.611975 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8701570-2c4b-43e6-9473-a28aae05647f-catalog-content\") pod \"b8701570-2c4b-43e6-9473-a28aae05647f\" (UID: \"b8701570-2c4b-43e6-9473-a28aae05647f\") " Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.612094 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8701570-2c4b-43e6-9473-a28aae05647f-utilities\") pod \"b8701570-2c4b-43e6-9473-a28aae05647f\" (UID: \"b8701570-2c4b-43e6-9473-a28aae05647f\") " Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.612119 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bc88\" (UniqueName: \"kubernetes.io/projected/b8701570-2c4b-43e6-9473-a28aae05647f-kube-api-access-6bc88\") pod \"b8701570-2c4b-43e6-9473-a28aae05647f\" (UID: \"b8701570-2c4b-43e6-9473-a28aae05647f\") " Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.612863 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8701570-2c4b-43e6-9473-a28aae05647f-utilities" (OuterVolumeSpecName: "utilities") pod "b8701570-2c4b-43e6-9473-a28aae05647f" (UID: "b8701570-2c4b-43e6-9473-a28aae05647f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.614888 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8701570-2c4b-43e6-9473-a28aae05647f-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.620641 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8701570-2c4b-43e6-9473-a28aae05647f-kube-api-access-6bc88" (OuterVolumeSpecName: "kube-api-access-6bc88") pod "b8701570-2c4b-43e6-9473-a28aae05647f" (UID: "b8701570-2c4b-43e6-9473-a28aae05647f"). InnerVolumeSpecName "kube-api-access-6bc88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.677376 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.695509 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8701570-2c4b-43e6-9473-a28aae05647f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8701570-2c4b-43e6-9473-a28aae05647f" (UID: "b8701570-2c4b-43e6-9473-a28aae05647f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.715729 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-config\") pod \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\" (UID: \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\") " Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.715809 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-client-ca\") pod \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\" (UID: \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\") " Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.715893 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6978z\" (UniqueName: \"kubernetes.io/projected/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-kube-api-access-6978z\") pod \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\" (UID: \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\") " Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.715919 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-serving-cert\") pod \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\" (UID: \"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f\") " Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.716174 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8701570-2c4b-43e6-9473-a28aae05647f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.716192 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bc88\" (UniqueName: \"kubernetes.io/projected/b8701570-2c4b-43e6-9473-a28aae05647f-kube-api-access-6bc88\") on node \"crc\" DevicePath 
\"\"" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.716537 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-client-ca" (OuterVolumeSpecName: "client-ca") pod "c970b9fd-5a2c-43db-a77c-078bcfbe3f6f" (UID: "c970b9fd-5a2c-43db-a77c-078bcfbe3f6f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.716565 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-config" (OuterVolumeSpecName: "config") pod "c970b9fd-5a2c-43db-a77c-078bcfbe3f6f" (UID: "c970b9fd-5a2c-43db-a77c-078bcfbe3f6f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.719382 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c970b9fd-5a2c-43db-a77c-078bcfbe3f6f" (UID: "c970b9fd-5a2c-43db-a77c-078bcfbe3f6f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.719709 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-kube-api-access-6978z" (OuterVolumeSpecName: "kube-api-access-6978z") pod "c970b9fd-5a2c-43db-a77c-078bcfbe3f6f" (UID: "c970b9fd-5a2c-43db-a77c-078bcfbe3f6f"). InnerVolumeSpecName "kube-api-access-6978z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.817140 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0677773-5515-4b74-9975-75dc72e6a127-client-ca\") pod \"a0677773-5515-4b74-9975-75dc72e6a127\" (UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.817270 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0677773-5515-4b74-9975-75dc72e6a127-config\") pod \"a0677773-5515-4b74-9975-75dc72e6a127\" (UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.817330 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0677773-5515-4b74-9975-75dc72e6a127-serving-cert\") pod \"a0677773-5515-4b74-9975-75dc72e6a127\" (UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.817378 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jh9f\" (UniqueName: \"kubernetes.io/projected/a0677773-5515-4b74-9975-75dc72e6a127-kube-api-access-6jh9f\") pod \"a0677773-5515-4b74-9975-75dc72e6a127\" (UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.817461 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0677773-5515-4b74-9975-75dc72e6a127-proxy-ca-bundles\") pod \"a0677773-5515-4b74-9975-75dc72e6a127\" (UID: \"a0677773-5515-4b74-9975-75dc72e6a127\") " Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.817863 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.817884 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6978z\" (UniqueName: \"kubernetes.io/projected/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-kube-api-access-6978z\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.817901 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.817910 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.817994 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0677773-5515-4b74-9975-75dc72e6a127-config" (OuterVolumeSpecName: "config") pod "a0677773-5515-4b74-9975-75dc72e6a127" (UID: "a0677773-5515-4b74-9975-75dc72e6a127"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.818032 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0677773-5515-4b74-9975-75dc72e6a127-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a0677773-5515-4b74-9975-75dc72e6a127" (UID: "a0677773-5515-4b74-9975-75dc72e6a127"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.818157 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0677773-5515-4b74-9975-75dc72e6a127-client-ca" (OuterVolumeSpecName: "client-ca") pod "a0677773-5515-4b74-9975-75dc72e6a127" (UID: "a0677773-5515-4b74-9975-75dc72e6a127"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.819810 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0677773-5515-4b74-9975-75dc72e6a127-kube-api-access-6jh9f" (OuterVolumeSpecName: "kube-api-access-6jh9f") pod "a0677773-5515-4b74-9975-75dc72e6a127" (UID: "a0677773-5515-4b74-9975-75dc72e6a127"). InnerVolumeSpecName "kube-api-access-6jh9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.820702 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0677773-5515-4b74-9975-75dc72e6a127-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a0677773-5515-4b74-9975-75dc72e6a127" (UID: "a0677773-5515-4b74-9975-75dc72e6a127"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.919104 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0677773-5515-4b74-9975-75dc72e6a127-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.919138 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0677773-5515-4b74-9975-75dc72e6a127-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.919150 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jh9f\" (UniqueName: \"kubernetes.io/projected/a0677773-5515-4b74-9975-75dc72e6a127-kube-api-access-6jh9f\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.919161 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0677773-5515-4b74-9975-75dc72e6a127-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:12 crc kubenswrapper[4796]: I1205 10:30:12.919170 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0677773-5515-4b74-9975-75dc72e6a127-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.038630 4796 generic.go:334] "Generic (PLEG): container finished" podID="a0677773-5515-4b74-9975-75dc72e6a127" containerID="37c1890208b4f8639a3faf14af8bf9444bfdb383a0ff934b221a03db9f5e4cb4" exitCode=0 Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.038714 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" event={"ID":"a0677773-5515-4b74-9975-75dc72e6a127","Type":"ContainerDied","Data":"37c1890208b4f8639a3faf14af8bf9444bfdb383a0ff934b221a03db9f5e4cb4"} Dec 05 10:30:13 crc 
kubenswrapper[4796]: I1205 10:30:13.038735 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" event={"ID":"a0677773-5515-4b74-9975-75dc72e6a127","Type":"ContainerDied","Data":"df40992b222d9822a1a4e4ba6f49890a8f6990c9c859cdeeddfa7bce2f1b0908"} Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.038751 4796 scope.go:117] "RemoveContainer" containerID="37c1890208b4f8639a3faf14af8bf9444bfdb383a0ff934b221a03db9f5e4cb4" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.038823 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-j7krr" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.043026 4796 generic.go:334] "Generic (PLEG): container finished" podID="b8701570-2c4b-43e6-9473-a28aae05647f" containerID="141d729bca906597086e996d59302eae4bb3c159da198c622d58ae5d5ee80a95" exitCode=0 Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.043079 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9gfhp" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.043146 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gfhp" event={"ID":"b8701570-2c4b-43e6-9473-a28aae05647f","Type":"ContainerDied","Data":"141d729bca906597086e996d59302eae4bb3c159da198c622d58ae5d5ee80a95"} Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.043174 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9gfhp" event={"ID":"b8701570-2c4b-43e6-9473-a28aae05647f","Type":"ContainerDied","Data":"1e7124000b083d1aa3817d39b3d0bf40343f64b29fc4bb414f49b9ebc3c4f586"} Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.045270 4796 generic.go:334] "Generic (PLEG): container finished" podID="c970b9fd-5a2c-43db-a77c-078bcfbe3f6f" containerID="e029260f600ee56f8c15414d018797bee09922ec75b8c4db5db6c47feed37497" exitCode=0 Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.045319 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" event={"ID":"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f","Type":"ContainerDied","Data":"e029260f600ee56f8c15414d018797bee09922ec75b8c4db5db6c47feed37497"} Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.045354 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" event={"ID":"c970b9fd-5a2c-43db-a77c-078bcfbe3f6f","Type":"ContainerDied","Data":"8b600d682a70ab8ca556ac1f906874207615743d4baae823a3eae0080ad70c7b"} Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.045410 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.058589 4796 scope.go:117] "RemoveContainer" containerID="37c1890208b4f8639a3faf14af8bf9444bfdb383a0ff934b221a03db9f5e4cb4" Dec 05 10:30:13 crc kubenswrapper[4796]: E1205 10:30:13.060870 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37c1890208b4f8639a3faf14af8bf9444bfdb383a0ff934b221a03db9f5e4cb4\": container with ID starting with 37c1890208b4f8639a3faf14af8bf9444bfdb383a0ff934b221a03db9f5e4cb4 not found: ID does not exist" containerID="37c1890208b4f8639a3faf14af8bf9444bfdb383a0ff934b221a03db9f5e4cb4" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.060933 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37c1890208b4f8639a3faf14af8bf9444bfdb383a0ff934b221a03db9f5e4cb4"} err="failed to get container status \"37c1890208b4f8639a3faf14af8bf9444bfdb383a0ff934b221a03db9f5e4cb4\": rpc error: code = NotFound desc = could not find container \"37c1890208b4f8639a3faf14af8bf9444bfdb383a0ff934b221a03db9f5e4cb4\": container with ID starting with 37c1890208b4f8639a3faf14af8bf9444bfdb383a0ff934b221a03db9f5e4cb4 not found: ID does not exist" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.060961 4796 scope.go:117] "RemoveContainer" containerID="141d729bca906597086e996d59302eae4bb3c159da198c622d58ae5d5ee80a95" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.061703 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-j7krr"] Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.064060 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-j7krr"] Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.086775 4796 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9gfhp"] Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.092003 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9gfhp"] Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.092099 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh"] Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.097512 4796 scope.go:117] "RemoveContainer" containerID="68e649412bb6463ce6887ae7a0970d602da7f0a68543702baa87a1d8ccaa617e" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.098204 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-g6ljh"] Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.114079 4796 scope.go:117] "RemoveContainer" containerID="2557a068df6d53294f28365a7a5f876efe5a059b9f2cf41b4cbc2ce7ab7f9703" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.125883 4796 scope.go:117] "RemoveContainer" containerID="141d729bca906597086e996d59302eae4bb3c159da198c622d58ae5d5ee80a95" Dec 05 10:30:13 crc kubenswrapper[4796]: E1205 10:30:13.126341 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"141d729bca906597086e996d59302eae4bb3c159da198c622d58ae5d5ee80a95\": container with ID starting with 141d729bca906597086e996d59302eae4bb3c159da198c622d58ae5d5ee80a95 not found: ID does not exist" containerID="141d729bca906597086e996d59302eae4bb3c159da198c622d58ae5d5ee80a95" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.126371 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"141d729bca906597086e996d59302eae4bb3c159da198c622d58ae5d5ee80a95"} err="failed to get container status \"141d729bca906597086e996d59302eae4bb3c159da198c622d58ae5d5ee80a95\": rpc error: 
code = NotFound desc = could not find container \"141d729bca906597086e996d59302eae4bb3c159da198c622d58ae5d5ee80a95\": container with ID starting with 141d729bca906597086e996d59302eae4bb3c159da198c622d58ae5d5ee80a95 not found: ID does not exist" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.126415 4796 scope.go:117] "RemoveContainer" containerID="68e649412bb6463ce6887ae7a0970d602da7f0a68543702baa87a1d8ccaa617e" Dec 05 10:30:13 crc kubenswrapper[4796]: E1205 10:30:13.126882 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68e649412bb6463ce6887ae7a0970d602da7f0a68543702baa87a1d8ccaa617e\": container with ID starting with 68e649412bb6463ce6887ae7a0970d602da7f0a68543702baa87a1d8ccaa617e not found: ID does not exist" containerID="68e649412bb6463ce6887ae7a0970d602da7f0a68543702baa87a1d8ccaa617e" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.126931 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68e649412bb6463ce6887ae7a0970d602da7f0a68543702baa87a1d8ccaa617e"} err="failed to get container status \"68e649412bb6463ce6887ae7a0970d602da7f0a68543702baa87a1d8ccaa617e\": rpc error: code = NotFound desc = could not find container \"68e649412bb6463ce6887ae7a0970d602da7f0a68543702baa87a1d8ccaa617e\": container with ID starting with 68e649412bb6463ce6887ae7a0970d602da7f0a68543702baa87a1d8ccaa617e not found: ID does not exist" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.126979 4796 scope.go:117] "RemoveContainer" containerID="2557a068df6d53294f28365a7a5f876efe5a059b9f2cf41b4cbc2ce7ab7f9703" Dec 05 10:30:13 crc kubenswrapper[4796]: E1205 10:30:13.127393 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2557a068df6d53294f28365a7a5f876efe5a059b9f2cf41b4cbc2ce7ab7f9703\": container with ID starting with 
2557a068df6d53294f28365a7a5f876efe5a059b9f2cf41b4cbc2ce7ab7f9703 not found: ID does not exist" containerID="2557a068df6d53294f28365a7a5f876efe5a059b9f2cf41b4cbc2ce7ab7f9703" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.127424 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2557a068df6d53294f28365a7a5f876efe5a059b9f2cf41b4cbc2ce7ab7f9703"} err="failed to get container status \"2557a068df6d53294f28365a7a5f876efe5a059b9f2cf41b4cbc2ce7ab7f9703\": rpc error: code = NotFound desc = could not find container \"2557a068df6d53294f28365a7a5f876efe5a059b9f2cf41b4cbc2ce7ab7f9703\": container with ID starting with 2557a068df6d53294f28365a7a5f876efe5a059b9f2cf41b4cbc2ce7ab7f9703 not found: ID does not exist" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.127468 4796 scope.go:117] "RemoveContainer" containerID="e029260f600ee56f8c15414d018797bee09922ec75b8c4db5db6c47feed37497" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.138094 4796 scope.go:117] "RemoveContainer" containerID="e029260f600ee56f8c15414d018797bee09922ec75b8c4db5db6c47feed37497" Dec 05 10:30:13 crc kubenswrapper[4796]: E1205 10:30:13.138449 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e029260f600ee56f8c15414d018797bee09922ec75b8c4db5db6c47feed37497\": container with ID starting with e029260f600ee56f8c15414d018797bee09922ec75b8c4db5db6c47feed37497 not found: ID does not exist" containerID="e029260f600ee56f8c15414d018797bee09922ec75b8c4db5db6c47feed37497" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.138480 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e029260f600ee56f8c15414d018797bee09922ec75b8c4db5db6c47feed37497"} err="failed to get container status \"e029260f600ee56f8c15414d018797bee09922ec75b8c4db5db6c47feed37497\": rpc error: code = NotFound desc = could not find container 
\"e029260f600ee56f8c15414d018797bee09922ec75b8c4db5db6c47feed37497\": container with ID starting with e029260f600ee56f8c15414d018797bee09922ec75b8c4db5db6c47feed37497 not found: ID does not exist" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.878779 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6"] Dec 05 10:30:13 crc kubenswrapper[4796]: E1205 10:30:13.879277 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="703b0763-b2db-4507-a779-65789d6cba65" containerName="extract-content" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879294 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="703b0763-b2db-4507-a779-65789d6cba65" containerName="extract-content" Dec 05 10:30:13 crc kubenswrapper[4796]: E1205 10:30:13.879307 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="703b0763-b2db-4507-a779-65789d6cba65" containerName="registry-server" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879313 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="703b0763-b2db-4507-a779-65789d6cba65" containerName="registry-server" Dec 05 10:30:13 crc kubenswrapper[4796]: E1205 10:30:13.879320 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818eb476-c48b-40fc-9dae-e3d7af8f8f25" containerName="registry-server" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879326 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="818eb476-c48b-40fc-9dae-e3d7af8f8f25" containerName="registry-server" Dec 05 10:30:13 crc kubenswrapper[4796]: E1205 10:30:13.879333 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818eb476-c48b-40fc-9dae-e3d7af8f8f25" containerName="extract-content" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879340 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="818eb476-c48b-40fc-9dae-e3d7af8f8f25" containerName="extract-content" Dec 05 10:30:13 
crc kubenswrapper[4796]: E1205 10:30:13.879348 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8701570-2c4b-43e6-9473-a28aae05647f" containerName="extract-content" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879354 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8701570-2c4b-43e6-9473-a28aae05647f" containerName="extract-content" Dec 05 10:30:13 crc kubenswrapper[4796]: E1205 10:30:13.879361 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="873436a9-325d-4ea9-9198-e34642a13a6f" containerName="extract-utilities" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879367 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="873436a9-325d-4ea9-9198-e34642a13a6f" containerName="extract-utilities" Dec 05 10:30:13 crc kubenswrapper[4796]: E1205 10:30:13.879373 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8701570-2c4b-43e6-9473-a28aae05647f" containerName="extract-utilities" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879379 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8701570-2c4b-43e6-9473-a28aae05647f" containerName="extract-utilities" Dec 05 10:30:13 crc kubenswrapper[4796]: E1205 10:30:13.879388 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c970b9fd-5a2c-43db-a77c-078bcfbe3f6f" containerName="route-controller-manager" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879394 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c970b9fd-5a2c-43db-a77c-078bcfbe3f6f" containerName="route-controller-manager" Dec 05 10:30:13 crc kubenswrapper[4796]: E1205 10:30:13.879405 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea62ad1-69f6-43c0-a663-293a0346277c" containerName="collect-profiles" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879412 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea62ad1-69f6-43c0-a663-293a0346277c" containerName="collect-profiles" 
Dec 05 10:30:13 crc kubenswrapper[4796]: E1205 10:30:13.879422 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="873436a9-325d-4ea9-9198-e34642a13a6f" containerName="extract-content" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879434 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="873436a9-325d-4ea9-9198-e34642a13a6f" containerName="extract-content" Dec 05 10:30:13 crc kubenswrapper[4796]: E1205 10:30:13.879440 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818eb476-c48b-40fc-9dae-e3d7af8f8f25" containerName="extract-utilities" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879446 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="818eb476-c48b-40fc-9dae-e3d7af8f8f25" containerName="extract-utilities" Dec 05 10:30:13 crc kubenswrapper[4796]: E1205 10:30:13.879451 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="703b0763-b2db-4507-a779-65789d6cba65" containerName="extract-utilities" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879457 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="703b0763-b2db-4507-a779-65789d6cba65" containerName="extract-utilities" Dec 05 10:30:13 crc kubenswrapper[4796]: E1205 10:30:13.879467 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="873436a9-325d-4ea9-9198-e34642a13a6f" containerName="registry-server" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879472 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="873436a9-325d-4ea9-9198-e34642a13a6f" containerName="registry-server" Dec 05 10:30:13 crc kubenswrapper[4796]: E1205 10:30:13.879481 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8701570-2c4b-43e6-9473-a28aae05647f" containerName="registry-server" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879487 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8701570-2c4b-43e6-9473-a28aae05647f" containerName="registry-server" Dec 05 
10:30:13 crc kubenswrapper[4796]: E1205 10:30:13.879494 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0677773-5515-4b74-9975-75dc72e6a127" containerName="controller-manager" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879500 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0677773-5515-4b74-9975-75dc72e6a127" containerName="controller-manager" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879577 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="703b0763-b2db-4507-a779-65789d6cba65" containerName="registry-server" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879586 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c970b9fd-5a2c-43db-a77c-078bcfbe3f6f" containerName="route-controller-manager" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879593 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="818eb476-c48b-40fc-9dae-e3d7af8f8f25" containerName="registry-server" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879600 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea62ad1-69f6-43c0-a663-293a0346277c" containerName="collect-profiles" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879607 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8701570-2c4b-43e6-9473-a28aae05647f" containerName="registry-server" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879615 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="873436a9-325d-4ea9-9198-e34642a13a6f" containerName="registry-server" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.879621 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0677773-5515-4b74-9975-75dc72e6a127" containerName="controller-manager" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.880017 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.881090 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8cbc6d589-8bqtl"] Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.882359 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.882372 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.882398 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.882450 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.882465 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.882470 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.882894 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.884025 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.884076 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"config" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.884579 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.884669 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.884751 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.885022 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.889468 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6"] Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.896707 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8cbc6d589-8bqtl"] Dec 05 10:30:13 crc kubenswrapper[4796]: I1205 10:30:13.898278 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.033057 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-client-ca\") pod \"route-controller-manager-6996cb79cf-4pdf6\" (UID: \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\") " pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.033145 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/470fbce8-0db2-421c-8653-5b7a35f10465-serving-cert\") pod \"controller-manager-8cbc6d589-8bqtl\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.033216 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-serving-cert\") pod \"route-controller-manager-6996cb79cf-4pdf6\" (UID: \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\") " pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.033249 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/470fbce8-0db2-421c-8653-5b7a35f10465-config\") pod \"controller-manager-8cbc6d589-8bqtl\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.033275 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/470fbce8-0db2-421c-8653-5b7a35f10465-proxy-ca-bundles\") pod \"controller-manager-8cbc6d589-8bqtl\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.033303 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdvbp\" (UniqueName: \"kubernetes.io/projected/470fbce8-0db2-421c-8653-5b7a35f10465-kube-api-access-pdvbp\") pod \"controller-manager-8cbc6d589-8bqtl\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " 
pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.033332 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-config\") pod \"route-controller-manager-6996cb79cf-4pdf6\" (UID: \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\") " pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.033836 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/470fbce8-0db2-421c-8653-5b7a35f10465-client-ca\") pod \"controller-manager-8cbc6d589-8bqtl\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.033876 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlqdg\" (UniqueName: \"kubernetes.io/projected/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-kube-api-access-zlqdg\") pod \"route-controller-manager-6996cb79cf-4pdf6\" (UID: \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\") " pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.038112 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0677773-5515-4b74-9975-75dc72e6a127" path="/var/lib/kubelet/pods/a0677773-5515-4b74-9975-75dc72e6a127/volumes" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.038642 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8701570-2c4b-43e6-9473-a28aae05647f" path="/var/lib/kubelet/pods/b8701570-2c4b-43e6-9473-a28aae05647f/volumes" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.039475 4796 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c970b9fd-5a2c-43db-a77c-078bcfbe3f6f" path="/var/lib/kubelet/pods/c970b9fd-5a2c-43db-a77c-078bcfbe3f6f/volumes" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.134442 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlqdg\" (UniqueName: \"kubernetes.io/projected/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-kube-api-access-zlqdg\") pod \"route-controller-manager-6996cb79cf-4pdf6\" (UID: \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\") " pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.134510 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-client-ca\") pod \"route-controller-manager-6996cb79cf-4pdf6\" (UID: \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\") " pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.134560 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/470fbce8-0db2-421c-8653-5b7a35f10465-serving-cert\") pod \"controller-manager-8cbc6d589-8bqtl\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.134590 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-serving-cert\") pod \"route-controller-manager-6996cb79cf-4pdf6\" (UID: \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\") " pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.134609 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/470fbce8-0db2-421c-8653-5b7a35f10465-config\") pod \"controller-manager-8cbc6d589-8bqtl\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.134633 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/470fbce8-0db2-421c-8653-5b7a35f10465-proxy-ca-bundles\") pod \"controller-manager-8cbc6d589-8bqtl\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.134659 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdvbp\" (UniqueName: \"kubernetes.io/projected/470fbce8-0db2-421c-8653-5b7a35f10465-kube-api-access-pdvbp\") pod \"controller-manager-8cbc6d589-8bqtl\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.135034 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-config\") pod \"route-controller-manager-6996cb79cf-4pdf6\" (UID: \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\") " pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.135069 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/470fbce8-0db2-421c-8653-5b7a35f10465-client-ca\") pod \"controller-manager-8cbc6d589-8bqtl\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " 
pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.135612 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-client-ca\") pod \"route-controller-manager-6996cb79cf-4pdf6\" (UID: \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\") " pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.135937 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-config\") pod \"route-controller-manager-6996cb79cf-4pdf6\" (UID: \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\") " pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.135972 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/470fbce8-0db2-421c-8653-5b7a35f10465-client-ca\") pod \"controller-manager-8cbc6d589-8bqtl\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.136071 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/470fbce8-0db2-421c-8653-5b7a35f10465-config\") pod \"controller-manager-8cbc6d589-8bqtl\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.136548 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/470fbce8-0db2-421c-8653-5b7a35f10465-proxy-ca-bundles\") pod 
\"controller-manager-8cbc6d589-8bqtl\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.141405 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-serving-cert\") pod \"route-controller-manager-6996cb79cf-4pdf6\" (UID: \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\") " pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.141413 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/470fbce8-0db2-421c-8653-5b7a35f10465-serving-cert\") pod \"controller-manager-8cbc6d589-8bqtl\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.149193 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlqdg\" (UniqueName: \"kubernetes.io/projected/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-kube-api-access-zlqdg\") pod \"route-controller-manager-6996cb79cf-4pdf6\" (UID: \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\") " pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.152287 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdvbp\" (UniqueName: \"kubernetes.io/projected/470fbce8-0db2-421c-8653-5b7a35f10465-kube-api-access-pdvbp\") pod \"controller-manager-8cbc6d589-8bqtl\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.194949 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.199232 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.292195 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.300142 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.301712 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.303882 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.303950 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.339207 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed932d37-9030-4389-8904-c7a8d5ab07d4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ed932d37-9030-4389-8904-c7a8d5ab07d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.339733 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed932d37-9030-4389-8904-c7a8d5ab07d4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ed932d37-9030-4389-8904-c7a8d5ab07d4\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.441146 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed932d37-9030-4389-8904-c7a8d5ab07d4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ed932d37-9030-4389-8904-c7a8d5ab07d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.441271 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed932d37-9030-4389-8904-c7a8d5ab07d4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ed932d37-9030-4389-8904-c7a8d5ab07d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.441330 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed932d37-9030-4389-8904-c7a8d5ab07d4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ed932d37-9030-4389-8904-c7a8d5ab07d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.457330 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed932d37-9030-4389-8904-c7a8d5ab07d4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ed932d37-9030-4389-8904-c7a8d5ab07d4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.599332 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8cbc6d589-8bqtl"] Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.601461 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6"] Dec 05 10:30:14 crc 
kubenswrapper[4796]: I1205 10:30:14.621953 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 10:30:14 crc kubenswrapper[4796]: I1205 10:30:14.828131 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 05 10:30:14 crc kubenswrapper[4796]: W1205 10:30:14.843349 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poded932d37_9030_4389_8904_c7a8d5ab07d4.slice/crio-3097351ccee2c542729a64be732d5fa525e7065a3c38d576d8cd668cec10e824 WatchSource:0}: Error finding container 3097351ccee2c542729a64be732d5fa525e7065a3c38d576d8cd668cec10e824: Status 404 returned error can't find the container with id 3097351ccee2c542729a64be732d5fa525e7065a3c38d576d8cd668cec10e824 Dec 05 10:30:15 crc kubenswrapper[4796]: I1205 10:30:15.058491 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ed932d37-9030-4389-8904-c7a8d5ab07d4","Type":"ContainerStarted","Data":"3097351ccee2c542729a64be732d5fa525e7065a3c38d576d8cd668cec10e824"} Dec 05 10:30:15 crc kubenswrapper[4796]: I1205 10:30:15.060300 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" event={"ID":"470fbce8-0db2-421c-8653-5b7a35f10465","Type":"ContainerStarted","Data":"9b1aa1c7725cce6a98caddc5325706f0bb1dbf63da7d164c491e990f55c6de58"} Dec 05 10:30:15 crc kubenswrapper[4796]: I1205 10:30:15.060419 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:15 crc kubenswrapper[4796]: I1205 10:30:15.060497 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" 
event={"ID":"470fbce8-0db2-421c-8653-5b7a35f10465","Type":"ContainerStarted","Data":"94f5145228efbfbfe2cd6e824d7446bf4b9af83e7fcfeb74056d3a7bd07629d8"} Dec 05 10:30:15 crc kubenswrapper[4796]: I1205 10:30:15.062761 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" event={"ID":"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7","Type":"ContainerStarted","Data":"718b617c7ad8f83815d0a1c427dffe6cc893608f02a0dab531663c732c2550ea"} Dec 05 10:30:15 crc kubenswrapper[4796]: I1205 10:30:15.062964 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" event={"ID":"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7","Type":"ContainerStarted","Data":"83518dbb31038be9b418e86112672373284a953dbc45d67bf77d42b5282defe1"} Dec 05 10:30:15 crc kubenswrapper[4796]: I1205 10:30:15.063040 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" Dec 05 10:30:15 crc kubenswrapper[4796]: I1205 10:30:15.064804 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:15 crc kubenswrapper[4796]: I1205 10:30:15.068938 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" Dec 05 10:30:15 crc kubenswrapper[4796]: I1205 10:30:15.078452 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" podStartSLOduration=3.078441726 podStartE2EDuration="3.078441726s" podCreationTimestamp="2025-12-05 10:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:30:15.077254684 +0000 UTC 
m=+161.365360197" watchObservedRunningTime="2025-12-05 10:30:15.078441726 +0000 UTC m=+161.366547238" Dec 05 10:30:15 crc kubenswrapper[4796]: I1205 10:30:15.101702 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" podStartSLOduration=3.101676642 podStartE2EDuration="3.101676642s" podCreationTimestamp="2025-12-05 10:30:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:30:15.099943104 +0000 UTC m=+161.388048617" watchObservedRunningTime="2025-12-05 10:30:15.101676642 +0000 UTC m=+161.389782155" Dec 05 10:30:16 crc kubenswrapper[4796]: I1205 10:30:16.068498 4796 generic.go:334] "Generic (PLEG): container finished" podID="ed932d37-9030-4389-8904-c7a8d5ab07d4" containerID="a46add00a6fd5c936bdeeac1888461449e01c4911e5fdd6d7c294a9df7e66951" exitCode=0 Dec 05 10:30:16 crc kubenswrapper[4796]: I1205 10:30:16.068627 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ed932d37-9030-4389-8904-c7a8d5ab07d4","Type":"ContainerDied","Data":"a46add00a6fd5c936bdeeac1888461449e01c4911e5fdd6d7c294a9df7e66951"} Dec 05 10:30:16 crc kubenswrapper[4796]: I1205 10:30:16.972233 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs\") pod \"network-metrics-daemon-sqdfm\" (UID: \"dcf780ba-edff-45ee-88e9-5b99e4d0e458\") " pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:30:16 crc kubenswrapper[4796]: I1205 10:30:16.974620 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 05 10:30:16 crc kubenswrapper[4796]: I1205 10:30:16.987989 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dcf780ba-edff-45ee-88e9-5b99e4d0e458-metrics-certs\") pod \"network-metrics-daemon-sqdfm\" (UID: \"dcf780ba-edff-45ee-88e9-5b99e4d0e458\") " pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:30:17 crc kubenswrapper[4796]: I1205 10:30:17.242718 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 05 10:30:17 crc kubenswrapper[4796]: I1205 10:30:17.251037 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sqdfm" Dec 05 10:30:17 crc kubenswrapper[4796]: I1205 10:30:17.345301 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 10:30:17 crc kubenswrapper[4796]: I1205 10:30:17.376801 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed932d37-9030-4389-8904-c7a8d5ab07d4-kube-api-access\") pod \"ed932d37-9030-4389-8904-c7a8d5ab07d4\" (UID: \"ed932d37-9030-4389-8904-c7a8d5ab07d4\") " Dec 05 10:30:17 crc kubenswrapper[4796]: I1205 10:30:17.377000 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed932d37-9030-4389-8904-c7a8d5ab07d4-kubelet-dir\") pod \"ed932d37-9030-4389-8904-c7a8d5ab07d4\" (UID: \"ed932d37-9030-4389-8904-c7a8d5ab07d4\") " Dec 05 10:30:17 crc kubenswrapper[4796]: I1205 10:30:17.377135 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ed932d37-9030-4389-8904-c7a8d5ab07d4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ed932d37-9030-4389-8904-c7a8d5ab07d4" (UID: "ed932d37-9030-4389-8904-c7a8d5ab07d4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:30:17 crc kubenswrapper[4796]: I1205 10:30:17.377450 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ed932d37-9030-4389-8904-c7a8d5ab07d4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:17 crc kubenswrapper[4796]: I1205 10:30:17.380446 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed932d37-9030-4389-8904-c7a8d5ab07d4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ed932d37-9030-4389-8904-c7a8d5ab07d4" (UID: "ed932d37-9030-4389-8904-c7a8d5ab07d4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:30:17 crc kubenswrapper[4796]: I1205 10:30:17.478612 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed932d37-9030-4389-8904-c7a8d5ab07d4-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:17 crc kubenswrapper[4796]: I1205 10:30:17.622783 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sqdfm"] Dec 05 10:30:17 crc kubenswrapper[4796]: W1205 10:30:17.629095 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcf780ba_edff_45ee_88e9_5b99e4d0e458.slice/crio-3c8125dfbccb7e54c37835401add8c2b03bed489bc1d121773cedf440d892da4 WatchSource:0}: Error finding container 3c8125dfbccb7e54c37835401add8c2b03bed489bc1d121773cedf440d892da4: Status 404 returned error can't find the container with id 3c8125dfbccb7e54c37835401add8c2b03bed489bc1d121773cedf440d892da4 Dec 05 10:30:18 crc kubenswrapper[4796]: I1205 10:30:18.085775 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 05 10:30:18 crc kubenswrapper[4796]: I1205 10:30:18.085787 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ed932d37-9030-4389-8904-c7a8d5ab07d4","Type":"ContainerDied","Data":"3097351ccee2c542729a64be732d5fa525e7065a3c38d576d8cd668cec10e824"} Dec 05 10:30:18 crc kubenswrapper[4796]: I1205 10:30:18.086162 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3097351ccee2c542729a64be732d5fa525e7065a3c38d576d8cd668cec10e824" Dec 05 10:30:18 crc kubenswrapper[4796]: I1205 10:30:18.088995 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" event={"ID":"dcf780ba-edff-45ee-88e9-5b99e4d0e458","Type":"ContainerStarted","Data":"e0419cf9666872bd9d24ea6af07d0465d49d7942ee3526ec9460165e72926585"} Dec 05 10:30:18 crc kubenswrapper[4796]: I1205 10:30:18.089028 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" event={"ID":"dcf780ba-edff-45ee-88e9-5b99e4d0e458","Type":"ContainerStarted","Data":"0d9d0103dbcc1c59f32878450282a7596fc64bad423a99a09e2af93be620cd07"} Dec 05 10:30:18 crc kubenswrapper[4796]: I1205 10:30:18.089039 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sqdfm" event={"ID":"dcf780ba-edff-45ee-88e9-5b99e4d0e458","Type":"ContainerStarted","Data":"3c8125dfbccb7e54c37835401add8c2b03bed489bc1d121773cedf440d892da4"} Dec 05 10:30:18 crc kubenswrapper[4796]: I1205 10:30:18.108495 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-sqdfm" podStartSLOduration=143.108480025 podStartE2EDuration="2m23.108480025s" podCreationTimestamp="2025-12-05 10:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-05 10:30:18.105840455 +0000 UTC m=+164.393945968" watchObservedRunningTime="2025-12-05 10:30:18.108480025 +0000 UTC m=+164.396585538" Dec 05 10:30:21 crc kubenswrapper[4796]: I1205 10:30:21.080549 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 05 10:30:21 crc kubenswrapper[4796]: E1205 10:30:21.081472 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed932d37-9030-4389-8904-c7a8d5ab07d4" containerName="pruner" Dec 05 10:30:21 crc kubenswrapper[4796]: I1205 10:30:21.081484 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed932d37-9030-4389-8904-c7a8d5ab07d4" containerName="pruner" Dec 05 10:30:21 crc kubenswrapper[4796]: I1205 10:30:21.081586 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed932d37-9030-4389-8904-c7a8d5ab07d4" containerName="pruner" Dec 05 10:30:21 crc kubenswrapper[4796]: I1205 10:30:21.081964 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 05 10:30:21 crc kubenswrapper[4796]: I1205 10:30:21.084098 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 05 10:30:21 crc kubenswrapper[4796]: I1205 10:30:21.084201 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 05 10:30:21 crc kubenswrapper[4796]: I1205 10:30:21.089793 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 05 10:30:21 crc kubenswrapper[4796]: I1205 10:30:21.119083 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35c6cfc7-cda9-4c88-9354-27745015055f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"35c6cfc7-cda9-4c88-9354-27745015055f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 10:30:21 crc kubenswrapper[4796]: I1205 10:30:21.119119 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/35c6cfc7-cda9-4c88-9354-27745015055f-var-lock\") pod \"installer-9-crc\" (UID: \"35c6cfc7-cda9-4c88-9354-27745015055f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 10:30:21 crc kubenswrapper[4796]: I1205 10:30:21.119154 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35c6cfc7-cda9-4c88-9354-27745015055f-kube-api-access\") pod \"installer-9-crc\" (UID: \"35c6cfc7-cda9-4c88-9354-27745015055f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 10:30:21 crc kubenswrapper[4796]: I1205 10:30:21.220054 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/35c6cfc7-cda9-4c88-9354-27745015055f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"35c6cfc7-cda9-4c88-9354-27745015055f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 10:30:21 crc kubenswrapper[4796]: I1205 10:30:21.220087 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/35c6cfc7-cda9-4c88-9354-27745015055f-var-lock\") pod \"installer-9-crc\" (UID: \"35c6cfc7-cda9-4c88-9354-27745015055f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 10:30:21 crc kubenswrapper[4796]: I1205 10:30:21.220119 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35c6cfc7-cda9-4c88-9354-27745015055f-kube-api-access\") pod \"installer-9-crc\" (UID: \"35c6cfc7-cda9-4c88-9354-27745015055f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 10:30:21 crc kubenswrapper[4796]: I1205 10:30:21.220150 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35c6cfc7-cda9-4c88-9354-27745015055f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"35c6cfc7-cda9-4c88-9354-27745015055f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 10:30:21 crc kubenswrapper[4796]: I1205 10:30:21.220209 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/35c6cfc7-cda9-4c88-9354-27745015055f-var-lock\") pod \"installer-9-crc\" (UID: \"35c6cfc7-cda9-4c88-9354-27745015055f\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 05 10:30:21 crc kubenswrapper[4796]: I1205 10:30:21.237144 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35c6cfc7-cda9-4c88-9354-27745015055f-kube-api-access\") pod \"installer-9-crc\" (UID: \"35c6cfc7-cda9-4c88-9354-27745015055f\") " 
pod="openshift-kube-apiserver/installer-9-crc" Dec 05 10:30:21 crc kubenswrapper[4796]: I1205 10:30:21.401441 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 05 10:30:21 crc kubenswrapper[4796]: I1205 10:30:21.752445 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 05 10:30:22 crc kubenswrapper[4796]: I1205 10:30:22.111064 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"35c6cfc7-cda9-4c88-9354-27745015055f","Type":"ContainerStarted","Data":"bca6bb00e8b083e80c3244608885a967b09402120d3b699675b3d1a67166933d"} Dec 05 10:30:22 crc kubenswrapper[4796]: I1205 10:30:22.111116 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"35c6cfc7-cda9-4c88-9354-27745015055f","Type":"ContainerStarted","Data":"9d2d487bce3b8c30c791fbeedb6908d3ec816d7f16e89d7b40940797076357c6"} Dec 05 10:30:22 crc kubenswrapper[4796]: I1205 10:30:22.124368 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.124353405 podStartE2EDuration="1.124353405s" podCreationTimestamp="2025-12-05 10:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:30:22.121231608 +0000 UTC m=+168.409337121" watchObservedRunningTime="2025-12-05 10:30:22.124353405 +0000 UTC m=+168.412458918" Dec 05 10:30:29 crc kubenswrapper[4796]: I1205 10:30:29.345588 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.026649 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8cbc6d589-8bqtl"] Dec 05 
10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.026848 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" podUID="470fbce8-0db2-421c-8653-5b7a35f10465" containerName="controller-manager" containerID="cri-o://9b1aa1c7725cce6a98caddc5325706f0bb1dbf63da7d164c491e990f55c6de58" gracePeriod=30 Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.043285 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6"] Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.043635 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" podUID="4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7" containerName="route-controller-manager" containerID="cri-o://718b617c7ad8f83815d0a1c427dffe6cc893608f02a0dab531663c732c2550ea" gracePeriod=30 Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.088521 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" podUID="56729699-46b2-454c-83d5-9dce9d90ac49" containerName="oauth-openshift" containerID="cri-o://b7e7480081aeba60d1481e5174a463e334b0201e49f5d6061b9caae14514da6e" gracePeriod=15 Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.156199 4796 generic.go:334] "Generic (PLEG): container finished" podID="470fbce8-0db2-421c-8653-5b7a35f10465" containerID="9b1aa1c7725cce6a98caddc5325706f0bb1dbf63da7d164c491e990f55c6de58" exitCode=0 Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.156269 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" event={"ID":"470fbce8-0db2-421c-8653-5b7a35f10465","Type":"ContainerDied","Data":"9b1aa1c7725cce6a98caddc5325706f0bb1dbf63da7d164c491e990f55c6de58"} Dec 05 10:30:32 crc 
kubenswrapper[4796]: I1205 10:30:32.157455 4796 generic.go:334] "Generic (PLEG): container finished" podID="4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7" containerID="718b617c7ad8f83815d0a1c427dffe6cc893608f02a0dab531663c732c2550ea" exitCode=0 Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.157490 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" event={"ID":"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7","Type":"ContainerDied","Data":"718b617c7ad8f83815d0a1c427dffe6cc893608f02a0dab531663c732c2550ea"} Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.496277 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.499518 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.507217 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.627848 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/470fbce8-0db2-421c-8653-5b7a35f10465-client-ca\") pod \"470fbce8-0db2-421c-8653-5b7a35f10465\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.627898 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdtgs\" (UniqueName: \"kubernetes.io/projected/56729699-46b2-454c-83d5-9dce9d90ac49-kube-api-access-sdtgs\") pod \"56729699-46b2-454c-83d5-9dce9d90ac49\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.627917 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-serving-cert\") pod \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\" (UID: \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.627939 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdvbp\" (UniqueName: \"kubernetes.io/projected/470fbce8-0db2-421c-8653-5b7a35f10465-kube-api-access-pdvbp\") pod \"470fbce8-0db2-421c-8653-5b7a35f10465\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.627957 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-router-certs\") pod \"56729699-46b2-454c-83d5-9dce9d90ac49\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.627973 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/470fbce8-0db2-421c-8653-5b7a35f10465-serving-cert\") pod \"470fbce8-0db2-421c-8653-5b7a35f10465\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.627989 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-template-login\") pod \"56729699-46b2-454c-83d5-9dce9d90ac49\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.628006 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-service-ca\") pod \"56729699-46b2-454c-83d5-9dce9d90ac49\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.628024 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-template-error\") pod \"56729699-46b2-454c-83d5-9dce9d90ac49\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.628039 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-trusted-ca-bundle\") pod \"56729699-46b2-454c-83d5-9dce9d90ac49\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.628054 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zlqdg\" (UniqueName: \"kubernetes.io/projected/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-kube-api-access-zlqdg\") pod \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\" (UID: \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.628073 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-cliconfig\") pod \"56729699-46b2-454c-83d5-9dce9d90ac49\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.628098 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-ocp-branding-template\") pod \"56729699-46b2-454c-83d5-9dce9d90ac49\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.628116 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-template-provider-selection\") pod \"56729699-46b2-454c-83d5-9dce9d90ac49\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.628130 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-audit-policies\") pod \"56729699-46b2-454c-83d5-9dce9d90ac49\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.628144 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/470fbce8-0db2-421c-8653-5b7a35f10465-proxy-ca-bundles\") pod \"470fbce8-0db2-421c-8653-5b7a35f10465\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.628161 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56729699-46b2-454c-83d5-9dce9d90ac49-audit-dir\") pod \"56729699-46b2-454c-83d5-9dce9d90ac49\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.628174 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/470fbce8-0db2-421c-8653-5b7a35f10465-config\") pod \"470fbce8-0db2-421c-8653-5b7a35f10465\" (UID: \"470fbce8-0db2-421c-8653-5b7a35f10465\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.628191 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-client-ca\") pod \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\" (UID: \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.628206 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-config\") pod \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\" (UID: \"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.628225 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-idp-0-file-data\") pod \"56729699-46b2-454c-83d5-9dce9d90ac49\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.628240 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-serving-cert\") pod \"56729699-46b2-454c-83d5-9dce9d90ac49\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.628258 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-session\") pod \"56729699-46b2-454c-83d5-9dce9d90ac49\" (UID: \"56729699-46b2-454c-83d5-9dce9d90ac49\") " Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.628465 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/470fbce8-0db2-421c-8653-5b7a35f10465-client-ca" (OuterVolumeSpecName: "client-ca") pod "470fbce8-0db2-421c-8653-5b7a35f10465" (UID: "470fbce8-0db2-421c-8653-5b7a35f10465"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.628850 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56729699-46b2-454c-83d5-9dce9d90ac49-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "56729699-46b2-454c-83d5-9dce9d90ac49" (UID: "56729699-46b2-454c-83d5-9dce9d90ac49"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.628884 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "56729699-46b2-454c-83d5-9dce9d90ac49" (UID: "56729699-46b2-454c-83d5-9dce9d90ac49"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.629377 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-config" (OuterVolumeSpecName: "config") pod "4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7" (UID: "4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.629715 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "56729699-46b2-454c-83d5-9dce9d90ac49" (UID: "56729699-46b2-454c-83d5-9dce9d90ac49"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.629934 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "56729699-46b2-454c-83d5-9dce9d90ac49" (UID: "56729699-46b2-454c-83d5-9dce9d90ac49"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.629952 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/470fbce8-0db2-421c-8653-5b7a35f10465-config" (OuterVolumeSpecName: "config") pod "470fbce8-0db2-421c-8653-5b7a35f10465" (UID: "470fbce8-0db2-421c-8653-5b7a35f10465"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.630337 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-client-ca" (OuterVolumeSpecName: "client-ca") pod "4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7" (UID: "4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.630751 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "56729699-46b2-454c-83d5-9dce9d90ac49" (UID: "56729699-46b2-454c-83d5-9dce9d90ac49"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.633203 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-kube-api-access-zlqdg" (OuterVolumeSpecName: "kube-api-access-zlqdg") pod "4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7" (UID: "4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7"). InnerVolumeSpecName "kube-api-access-zlqdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.633226 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7" (UID: "4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.633283 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/470fbce8-0db2-421c-8653-5b7a35f10465-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "470fbce8-0db2-421c-8653-5b7a35f10465" (UID: "470fbce8-0db2-421c-8653-5b7a35f10465"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.633286 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/470fbce8-0db2-421c-8653-5b7a35f10465-kube-api-access-pdvbp" (OuterVolumeSpecName: "kube-api-access-pdvbp") pod "470fbce8-0db2-421c-8653-5b7a35f10465" (UID: "470fbce8-0db2-421c-8653-5b7a35f10465"). InnerVolumeSpecName "kube-api-access-pdvbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.633482 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "56729699-46b2-454c-83d5-9dce9d90ac49" (UID: "56729699-46b2-454c-83d5-9dce9d90ac49"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.633631 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/470fbce8-0db2-421c-8653-5b7a35f10465-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "470fbce8-0db2-421c-8653-5b7a35f10465" (UID: "470fbce8-0db2-421c-8653-5b7a35f10465"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.633644 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56729699-46b2-454c-83d5-9dce9d90ac49-kube-api-access-sdtgs" (OuterVolumeSpecName: "kube-api-access-sdtgs") pod "56729699-46b2-454c-83d5-9dce9d90ac49" (UID: "56729699-46b2-454c-83d5-9dce9d90ac49"). InnerVolumeSpecName "kube-api-access-sdtgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.633725 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "56729699-46b2-454c-83d5-9dce9d90ac49" (UID: "56729699-46b2-454c-83d5-9dce9d90ac49"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.633925 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "56729699-46b2-454c-83d5-9dce9d90ac49" (UID: "56729699-46b2-454c-83d5-9dce9d90ac49"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.634123 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "56729699-46b2-454c-83d5-9dce9d90ac49" (UID: "56729699-46b2-454c-83d5-9dce9d90ac49"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.634215 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "56729699-46b2-454c-83d5-9dce9d90ac49" (UID: "56729699-46b2-454c-83d5-9dce9d90ac49"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.634429 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "56729699-46b2-454c-83d5-9dce9d90ac49" (UID: "56729699-46b2-454c-83d5-9dce9d90ac49"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.634462 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "56729699-46b2-454c-83d5-9dce9d90ac49" (UID: "56729699-46b2-454c-83d5-9dce9d90ac49"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.635455 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "56729699-46b2-454c-83d5-9dce9d90ac49" (UID: "56729699-46b2-454c-83d5-9dce9d90ac49"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729820 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729846 4796 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729857 4796 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/470fbce8-0db2-421c-8653-5b7a35f10465-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729865 4796 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56729699-46b2-454c-83d5-9dce9d90ac49-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729873 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/470fbce8-0db2-421c-8653-5b7a35f10465-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729881 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729888 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 
10:30:32.729896 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729906 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729914 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729922 4796 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/470fbce8-0db2-421c-8653-5b7a35f10465-client-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729929 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729936 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdtgs\" (UniqueName: \"kubernetes.io/projected/56729699-46b2-454c-83d5-9dce9d90ac49-kube-api-access-sdtgs\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729943 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdvbp\" (UniqueName: \"kubernetes.io/projected/470fbce8-0db2-421c-8653-5b7a35f10465-kube-api-access-pdvbp\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729951 4796 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729960 4796 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/470fbce8-0db2-421c-8653-5b7a35f10465-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729968 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729975 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729983 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729991 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.729998 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlqdg\" (UniqueName: \"kubernetes.io/projected/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7-kube-api-access-zlqdg\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 
crc kubenswrapper[4796]: I1205 10:30:32.730006 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:32 crc kubenswrapper[4796]: I1205 10:30:32.730013 4796 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56729699-46b2-454c-83d5-9dce9d90ac49-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.162351 4796 generic.go:334] "Generic (PLEG): container finished" podID="56729699-46b2-454c-83d5-9dce9d90ac49" containerID="b7e7480081aeba60d1481e5174a463e334b0201e49f5d6061b9caae14514da6e" exitCode=0 Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.162388 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" event={"ID":"56729699-46b2-454c-83d5-9dce9d90ac49","Type":"ContainerDied","Data":"b7e7480081aeba60d1481e5174a463e334b0201e49f5d6061b9caae14514da6e"} Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.162442 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" event={"ID":"56729699-46b2-454c-83d5-9dce9d90ac49","Type":"ContainerDied","Data":"716acb3d87f0fa423b63c8e082d22df1adea00e3b123dc1ff91eec8963682a25"} Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.162458 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qvpdg" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.162461 4796 scope.go:117] "RemoveContainer" containerID="b7e7480081aeba60d1481e5174a463e334b0201e49f5d6061b9caae14514da6e" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.164893 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.164891 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6" event={"ID":"4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7","Type":"ContainerDied","Data":"83518dbb31038be9b418e86112672373284a953dbc45d67bf77d42b5282defe1"} Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.166451 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" event={"ID":"470fbce8-0db2-421c-8653-5b7a35f10465","Type":"ContainerDied","Data":"94f5145228efbfbfe2cd6e824d7446bf4b9af83e7fcfeb74056d3a7bd07629d8"} Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.166492 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8cbc6d589-8bqtl" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.181284 4796 scope.go:117] "RemoveContainer" containerID="b7e7480081aeba60d1481e5174a463e334b0201e49f5d6061b9caae14514da6e" Dec 05 10:30:33 crc kubenswrapper[4796]: E1205 10:30:33.182102 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7e7480081aeba60d1481e5174a463e334b0201e49f5d6061b9caae14514da6e\": container with ID starting with b7e7480081aeba60d1481e5174a463e334b0201e49f5d6061b9caae14514da6e not found: ID does not exist" containerID="b7e7480081aeba60d1481e5174a463e334b0201e49f5d6061b9caae14514da6e" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.182148 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7e7480081aeba60d1481e5174a463e334b0201e49f5d6061b9caae14514da6e"} err="failed to get container status \"b7e7480081aeba60d1481e5174a463e334b0201e49f5d6061b9caae14514da6e\": rpc error: code = NotFound desc = could not find container \"b7e7480081aeba60d1481e5174a463e334b0201e49f5d6061b9caae14514da6e\": container with ID starting with b7e7480081aeba60d1481e5174a463e334b0201e49f5d6061b9caae14514da6e not found: ID does not exist" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.182166 4796 scope.go:117] "RemoveContainer" containerID="718b617c7ad8f83815d0a1c427dffe6cc893608f02a0dab531663c732c2550ea" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.193435 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6"] Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.197268 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6996cb79cf-4pdf6"] Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.199597 4796 scope.go:117] 
"RemoveContainer" containerID="9b1aa1c7725cce6a98caddc5325706f0bb1dbf63da7d164c491e990f55c6de58" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.199809 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qvpdg"] Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.202325 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qvpdg"] Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.209038 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8cbc6d589-8bqtl"] Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.211203 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8cbc6d589-8bqtl"] Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.887330 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b89f5bf68-hcq8f"] Dec 05 10:30:33 crc kubenswrapper[4796]: E1205 10:30:33.887533 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7" containerName="route-controller-manager" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.887552 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7" containerName="route-controller-manager" Dec 05 10:30:33 crc kubenswrapper[4796]: E1205 10:30:33.887575 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56729699-46b2-454c-83d5-9dce9d90ac49" containerName="oauth-openshift" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.887580 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="56729699-46b2-454c-83d5-9dce9d90ac49" containerName="oauth-openshift" Dec 05 10:30:33 crc kubenswrapper[4796]: E1205 10:30:33.887589 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="470fbce8-0db2-421c-8653-5b7a35f10465" containerName="controller-manager" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.887595 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="470fbce8-0db2-421c-8653-5b7a35f10465" containerName="controller-manager" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.887670 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="470fbce8-0db2-421c-8653-5b7a35f10465" containerName="controller-manager" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.887702 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7" containerName="route-controller-manager" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.887713 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="56729699-46b2-454c-83d5-9dce9d90ac49" containerName="oauth-openshift" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.888035 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.889359 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll"] Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.889633 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.889637 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.889637 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.889877 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.889920 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.890285 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.890314 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.891292 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.891386 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.891605 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.891734 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.891737 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.892101 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.901373 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll"] Dec 05 10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.901539 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 
10:30:33 crc kubenswrapper[4796]: I1205 10:30:33.903162 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b89f5bf68-hcq8f"] Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.037207 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="470fbce8-0db2-421c-8653-5b7a35f10465" path="/var/lib/kubelet/pods/470fbce8-0db2-421c-8653-5b7a35f10465/volumes" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.037798 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7" path="/var/lib/kubelet/pods/4f4e8043-b6b3-44d5-9a75-36a1ae2dcfb7/volumes" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.038305 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56729699-46b2-454c-83d5-9dce9d90ac49" path="/var/lib/kubelet/pods/56729699-46b2-454c-83d5-9dce9d90ac49/volumes" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.040791 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qljft\" (UniqueName: \"kubernetes.io/projected/3ff6cfac-7d49-4521-9372-98e90dd5e3a9-kube-api-access-qljft\") pod \"route-controller-manager-6d9558bd48-mzdll\" (UID: \"3ff6cfac-7d49-4521-9372-98e90dd5e3a9\") " pod="openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.040825 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef390048-8252-421a-8943-a313569746e0-serving-cert\") pod \"controller-manager-b89f5bf68-hcq8f\" (UID: \"ef390048-8252-421a-8943-a313569746e0\") " pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.040845 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/ef390048-8252-421a-8943-a313569746e0-config\") pod \"controller-manager-b89f5bf68-hcq8f\" (UID: \"ef390048-8252-421a-8943-a313569746e0\") " pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.040862 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef390048-8252-421a-8943-a313569746e0-proxy-ca-bundles\") pod \"controller-manager-b89f5bf68-hcq8f\" (UID: \"ef390048-8252-421a-8943-a313569746e0\") " pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.040883 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsqjv\" (UniqueName: \"kubernetes.io/projected/ef390048-8252-421a-8943-a313569746e0-kube-api-access-fsqjv\") pod \"controller-manager-b89f5bf68-hcq8f\" (UID: \"ef390048-8252-421a-8943-a313569746e0\") " pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.040905 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef390048-8252-421a-8943-a313569746e0-client-ca\") pod \"controller-manager-b89f5bf68-hcq8f\" (UID: \"ef390048-8252-421a-8943-a313569746e0\") " pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.040919 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ff6cfac-7d49-4521-9372-98e90dd5e3a9-client-ca\") pod \"route-controller-manager-6d9558bd48-mzdll\" (UID: \"3ff6cfac-7d49-4521-9372-98e90dd5e3a9\") " 
pod="openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.040939 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ff6cfac-7d49-4521-9372-98e90dd5e3a9-serving-cert\") pod \"route-controller-manager-6d9558bd48-mzdll\" (UID: \"3ff6cfac-7d49-4521-9372-98e90dd5e3a9\") " pod="openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.040972 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ff6cfac-7d49-4521-9372-98e90dd5e3a9-config\") pod \"route-controller-manager-6d9558bd48-mzdll\" (UID: \"3ff6cfac-7d49-4521-9372-98e90dd5e3a9\") " pod="openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.141503 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ff6cfac-7d49-4521-9372-98e90dd5e3a9-config\") pod \"route-controller-manager-6d9558bd48-mzdll\" (UID: \"3ff6cfac-7d49-4521-9372-98e90dd5e3a9\") " pod="openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.141575 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qljft\" (UniqueName: \"kubernetes.io/projected/3ff6cfac-7d49-4521-9372-98e90dd5e3a9-kube-api-access-qljft\") pod \"route-controller-manager-6d9558bd48-mzdll\" (UID: \"3ff6cfac-7d49-4521-9372-98e90dd5e3a9\") " pod="openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.141596 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef390048-8252-421a-8943-a313569746e0-serving-cert\") pod \"controller-manager-b89f5bf68-hcq8f\" (UID: \"ef390048-8252-421a-8943-a313569746e0\") " pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.141626 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef390048-8252-421a-8943-a313569746e0-config\") pod \"controller-manager-b89f5bf68-hcq8f\" (UID: \"ef390048-8252-421a-8943-a313569746e0\") " pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.141640 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef390048-8252-421a-8943-a313569746e0-proxy-ca-bundles\") pod \"controller-manager-b89f5bf68-hcq8f\" (UID: \"ef390048-8252-421a-8943-a313569746e0\") " pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.141654 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsqjv\" (UniqueName: \"kubernetes.io/projected/ef390048-8252-421a-8943-a313569746e0-kube-api-access-fsqjv\") pod \"controller-manager-b89f5bf68-hcq8f\" (UID: \"ef390048-8252-421a-8943-a313569746e0\") " pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.141724 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef390048-8252-421a-8943-a313569746e0-client-ca\") pod \"controller-manager-b89f5bf68-hcq8f\" (UID: \"ef390048-8252-421a-8943-a313569746e0\") " pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 
10:30:34.141742 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ff6cfac-7d49-4521-9372-98e90dd5e3a9-client-ca\") pod \"route-controller-manager-6d9558bd48-mzdll\" (UID: \"3ff6cfac-7d49-4521-9372-98e90dd5e3a9\") " pod="openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.141761 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ff6cfac-7d49-4521-9372-98e90dd5e3a9-serving-cert\") pod \"route-controller-manager-6d9558bd48-mzdll\" (UID: \"3ff6cfac-7d49-4521-9372-98e90dd5e3a9\") " pod="openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.143713 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.144015 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.144290 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.144372 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.144455 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.145643 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.148025 4796 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.153568 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3ff6cfac-7d49-4521-9372-98e90dd5e3a9-client-ca\") pod \"route-controller-manager-6d9558bd48-mzdll\" (UID: \"3ff6cfac-7d49-4521-9372-98e90dd5e3a9\") " pod="openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.153699 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef390048-8252-421a-8943-a313569746e0-client-ca\") pod \"controller-manager-b89f5bf68-hcq8f\" (UID: \"ef390048-8252-421a-8943-a313569746e0\") " pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.153979 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef390048-8252-421a-8943-a313569746e0-config\") pod \"controller-manager-b89f5bf68-hcq8f\" (UID: \"ef390048-8252-421a-8943-a313569746e0\") " pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.154598 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.155048 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.155308 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ff6cfac-7d49-4521-9372-98e90dd5e3a9-serving-cert\") pod \"route-controller-manager-6d9558bd48-mzdll\" (UID: 
\"3ff6cfac-7d49-4521-9372-98e90dd5e3a9\") " pod="openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.155674 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef390048-8252-421a-8943-a313569746e0-serving-cert\") pod \"controller-manager-b89f5bf68-hcq8f\" (UID: \"ef390048-8252-421a-8943-a313569746e0\") " pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.158965 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ff6cfac-7d49-4521-9372-98e90dd5e3a9-config\") pod \"route-controller-manager-6d9558bd48-mzdll\" (UID: \"3ff6cfac-7d49-4521-9372-98e90dd5e3a9\") " pod="openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.161434 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ef390048-8252-421a-8943-a313569746e0-proxy-ca-bundles\") pod \"controller-manager-b89f5bf68-hcq8f\" (UID: \"ef390048-8252-421a-8943-a313569746e0\") " pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.163478 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.163868 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.175861 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsqjv\" (UniqueName: 
\"kubernetes.io/projected/ef390048-8252-421a-8943-a313569746e0-kube-api-access-fsqjv\") pod \"controller-manager-b89f5bf68-hcq8f\" (UID: \"ef390048-8252-421a-8943-a313569746e0\") " pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.177128 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qljft\" (UniqueName: \"kubernetes.io/projected/3ff6cfac-7d49-4521-9372-98e90dd5e3a9-kube-api-access-qljft\") pod \"route-controller-manager-6d9558bd48-mzdll\" (UID: \"3ff6cfac-7d49-4521-9372-98e90dd5e3a9\") " pod="openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.203318 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.207033 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.212962 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.217763 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.544143 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll"] Dec 05 10:30:34 crc kubenswrapper[4796]: W1205 10:30:34.547634 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ff6cfac_7d49_4521_9372_98e90dd5e3a9.slice/crio-6ac102c39b90f81647830b1f3a2f6e3233b67f5bc808ad0b19761c960a4ff1d1 WatchSource:0}: Error finding container 6ac102c39b90f81647830b1f3a2f6e3233b67f5bc808ad0b19761c960a4ff1d1: Status 404 returned error can't find the container with id 6ac102c39b90f81647830b1f3a2f6e3233b67f5bc808ad0b19761c960a4ff1d1 Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.575341 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b89f5bf68-hcq8f"] Dec 05 10:30:34 crc kubenswrapper[4796]: W1205 10:30:34.586454 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef390048_8252_421a_8943_a313569746e0.slice/crio-20aff4cbbb2e09dcf39aefd155db7857f69a6b8f5a4bce30074f26ff55ea3077 WatchSource:0}: Error finding container 20aff4cbbb2e09dcf39aefd155db7857f69a6b8f5a4bce30074f26ff55ea3077: Status 404 returned error can't find the container with id 20aff4cbbb2e09dcf39aefd155db7857f69a6b8f5a4bce30074f26ff55ea3077 Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.889446 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn"] Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.890131 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.894296 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.894452 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.894776 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.895089 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.895308 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.895434 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.895748 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.895889 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.895990 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.896842 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 05 10:30:34 crc 
kubenswrapper[4796]: I1205 10:30:34.898196 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.898247 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.899502 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn"] Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.903235 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.904032 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 05 10:30:34 crc kubenswrapper[4796]: I1205 10:30:34.912942 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.049665 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.049711 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3b32a4f9-195e-432a-959d-37d6b6a993cc-audit-dir\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " 
pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.049733 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.049752 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-user-template-login\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.049771 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.049863 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.049929 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.049959 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-user-template-error\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.049979 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-session\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.050017 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.050045 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.050061 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5rkk\" (UniqueName: \"kubernetes.io/projected/3b32a4f9-195e-432a-959d-37d6b6a993cc-kube-api-access-h5rkk\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.050075 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3b32a4f9-195e-432a-959d-37d6b6a993cc-audit-policies\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.050093 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.150940 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " 
pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.150977 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5rkk\" (UniqueName: \"kubernetes.io/projected/3b32a4f9-195e-432a-959d-37d6b6a993cc-kube-api-access-h5rkk\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.150999 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3b32a4f9-195e-432a-959d-37d6b6a993cc-audit-policies\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.151018 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.151035 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.151052 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/3b32a4f9-195e-432a-959d-37d6b6a993cc-audit-dir\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.151066 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.151084 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-user-template-login\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.151109 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.151129 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " 
pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.151151 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.151170 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-user-template-error\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.151189 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-session\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.151210 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.151483 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/3b32a4f9-195e-432a-959d-37d6b6a993cc-audit-dir\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.151962 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.152011 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3b32a4f9-195e-432a-959d-37d6b6a993cc-audit-policies\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.152028 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.152025 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 
10:30:35.155391 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-user-template-error\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.155609 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-user-template-login\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.155892 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.155930 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.156015 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.156137 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-session\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.156249 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.156484 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3b32a4f9-195e-432a-959d-37d6b6a993cc-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.164743 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5rkk\" (UniqueName: \"kubernetes.io/projected/3b32a4f9-195e-432a-959d-37d6b6a993cc-kube-api-access-h5rkk\") pod \"oauth-openshift-7d5bfb7cdd-48fxn\" (UID: \"3b32a4f9-195e-432a-959d-37d6b6a993cc\") " pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc 
kubenswrapper[4796]: I1205 10:30:35.176375 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll" event={"ID":"3ff6cfac-7d49-4521-9372-98e90dd5e3a9","Type":"ContainerStarted","Data":"ef0cfef1d75a462d039250fbf3d2b4044e986c069fed66472ad0ec4be4319bb3"} Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.176421 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll" event={"ID":"3ff6cfac-7d49-4521-9372-98e90dd5e3a9","Type":"ContainerStarted","Data":"6ac102c39b90f81647830b1f3a2f6e3233b67f5bc808ad0b19761c960a4ff1d1"} Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.176630 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.176850 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.176886 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.178861 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" event={"ID":"ef390048-8252-421a-8943-a313569746e0","Type":"ContainerStarted","Data":"67b08b68e0581c31d043df825b66e8a1155159a6dfe7d82146c4e2f7bd147ec3"} Dec 05 
10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.178888 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" event={"ID":"ef390048-8252-421a-8943-a313569746e0","Type":"ContainerStarted","Data":"20aff4cbbb2e09dcf39aefd155db7857f69a6b8f5a4bce30074f26ff55ea3077"} Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.179312 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.180295 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.184063 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.188223 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d9558bd48-mzdll" podStartSLOduration=3.188211471 podStartE2EDuration="3.188211471s" podCreationTimestamp="2025-12-05 10:30:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:30:35.187327631 +0000 UTC m=+181.475433143" watchObservedRunningTime="2025-12-05 10:30:35.188211471 +0000 UTC m=+181.476316984" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.201779 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.202522 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b89f5bf68-hcq8f" podStartSLOduration=3.202512998 podStartE2EDuration="3.202512998s" podCreationTimestamp="2025-12-05 10:30:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:30:35.200022599 +0000 UTC m=+181.488128112" watchObservedRunningTime="2025-12-05 10:30:35.202512998 +0000 UTC m=+181.490618512" Dec 05 10:30:35 crc kubenswrapper[4796]: I1205 10:30:35.553260 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn"] Dec 05 10:30:35 crc kubenswrapper[4796]: W1205 10:30:35.559299 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b32a4f9_195e_432a_959d_37d6b6a993cc.slice/crio-f96c54b8681960dadfc61964a4329aa1bd09b56aa71a7e09788c8177c10293b1 WatchSource:0}: Error finding container f96c54b8681960dadfc61964a4329aa1bd09b56aa71a7e09788c8177c10293b1: Status 404 returned error can't find the container with id f96c54b8681960dadfc61964a4329aa1bd09b56aa71a7e09788c8177c10293b1 Dec 05 10:30:36 crc kubenswrapper[4796]: I1205 10:30:36.185452 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" event={"ID":"3b32a4f9-195e-432a-959d-37d6b6a993cc","Type":"ContainerStarted","Data":"dfc9fc272806a0d96f7715ce1d4d75318a89cd007f202f867a334252f7355852"} Dec 05 10:30:36 crc kubenswrapper[4796]: I1205 10:30:36.185776 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" 
event={"ID":"3b32a4f9-195e-432a-959d-37d6b6a993cc","Type":"ContainerStarted","Data":"f96c54b8681960dadfc61964a4329aa1bd09b56aa71a7e09788c8177c10293b1"} Dec 05 10:30:36 crc kubenswrapper[4796]: I1205 10:30:36.200282 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" podStartSLOduration=29.200261918 podStartE2EDuration="29.200261918s" podCreationTimestamp="2025-12-05 10:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:30:36.199196165 +0000 UTC m=+182.487301678" watchObservedRunningTime="2025-12-05 10:30:36.200261918 +0000 UTC m=+182.488367431" Dec 05 10:30:37 crc kubenswrapper[4796]: I1205 10:30:37.191250 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:37 crc kubenswrapper[4796]: I1205 10:30:37.196332 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7d5bfb7cdd-48fxn" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.028071 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zptnp"] Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.028722 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zptnp" podUID="9b4178e4-10a3-4011-8994-b7ca6f64b45d" containerName="registry-server" containerID="cri-o://4bbd9db58b22a6fee8f59144defafcc2e06243e23093334fa25e1cd4bd2444b2" gracePeriod=30 Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.036576 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-snx4m"] Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.036745 4796 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-snx4m" podUID="49519f1b-a5e0-4d0f-bf1d-d6927a8f0957" containerName="registry-server" containerID="cri-o://3ff3a937cabf9836c0a3e2de56134418d16bca907552095d3d9f295e02a04370" gracePeriod=30 Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.056753 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gmlb4"] Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.057136 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" podUID="356c2f0d-a078-42e4-925a-e4f39864eb48" containerName="marketplace-operator" containerID="cri-o://0d6153913013fd54860ad502a25257bf3499bc1bb66acf947da50cb7f5bce740" gracePeriod=30 Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.058524 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fdddq"] Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.059118 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fdddq" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.062836 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9xblc"] Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.063010 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9xblc" podUID="656b901f-73ef-4d9d-adb1-a8db28382f48" containerName="registry-server" containerID="cri-o://a71605dfe3217f3b15fc4e9076d36c1409b4eb1b3ff0e1059871bf3772406630" gracePeriod=30 Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.066258 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8gj86"] Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.066472 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8gj86" podUID="2786ac62-1a3d-46fe-951c-be542f08bf55" containerName="registry-server" containerID="cri-o://8270e33358c8830e71bd2ed5044b195492f393282a16c3c044aee73060e751a9" gracePeriod=30 Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.070821 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fdddq"] Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.174200 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6b04e09-09cc-4ca6-a5bd-61a46535f226-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fdddq\" (UID: \"c6b04e09-09cc-4ca6-a5bd-61a46535f226\") " pod="openshift-marketplace/marketplace-operator-79b997595-fdddq" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.174267 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6b04e09-09cc-4ca6-a5bd-61a46535f226-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fdddq\" (UID: \"c6b04e09-09cc-4ca6-a5bd-61a46535f226\") " pod="openshift-marketplace/marketplace-operator-79b997595-fdddq" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.174287 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26h9b\" (UniqueName: \"kubernetes.io/projected/c6b04e09-09cc-4ca6-a5bd-61a46535f226-kube-api-access-26h9b\") pod \"marketplace-operator-79b997595-fdddq\" (UID: \"c6b04e09-09cc-4ca6-a5bd-61a46535f226\") " pod="openshift-marketplace/marketplace-operator-79b997595-fdddq" Dec 05 10:30:47 crc kubenswrapper[4796]: E1205 10:30:47.177113 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a71605dfe3217f3b15fc4e9076d36c1409b4eb1b3ff0e1059871bf3772406630 is running failed: container process not found" containerID="a71605dfe3217f3b15fc4e9076d36c1409b4eb1b3ff0e1059871bf3772406630" cmd=["grpc_health_probe","-addr=:50051"] Dec 05 10:30:47 crc kubenswrapper[4796]: E1205 10:30:47.177525 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a71605dfe3217f3b15fc4e9076d36c1409b4eb1b3ff0e1059871bf3772406630 is running failed: container process not found" containerID="a71605dfe3217f3b15fc4e9076d36c1409b4eb1b3ff0e1059871bf3772406630" cmd=["grpc_health_probe","-addr=:50051"] Dec 05 10:30:47 crc kubenswrapper[4796]: E1205 10:30:47.177793 4796 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a71605dfe3217f3b15fc4e9076d36c1409b4eb1b3ff0e1059871bf3772406630 is running failed: container process not found" 
containerID="a71605dfe3217f3b15fc4e9076d36c1409b4eb1b3ff0e1059871bf3772406630" cmd=["grpc_health_probe","-addr=:50051"] Dec 05 10:30:47 crc kubenswrapper[4796]: E1205 10:30:47.177824 4796 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a71605dfe3217f3b15fc4e9076d36c1409b4eb1b3ff0e1059871bf3772406630 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-9xblc" podUID="656b901f-73ef-4d9d-adb1-a8db28382f48" containerName="registry-server" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.229720 4796 generic.go:334] "Generic (PLEG): container finished" podID="49519f1b-a5e0-4d0f-bf1d-d6927a8f0957" containerID="3ff3a937cabf9836c0a3e2de56134418d16bca907552095d3d9f295e02a04370" exitCode=0 Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.229783 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snx4m" event={"ID":"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957","Type":"ContainerDied","Data":"3ff3a937cabf9836c0a3e2de56134418d16bca907552095d3d9f295e02a04370"} Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.232619 4796 generic.go:334] "Generic (PLEG): container finished" podID="2786ac62-1a3d-46fe-951c-be542f08bf55" containerID="8270e33358c8830e71bd2ed5044b195492f393282a16c3c044aee73060e751a9" exitCode=0 Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.232652 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gj86" event={"ID":"2786ac62-1a3d-46fe-951c-be542f08bf55","Type":"ContainerDied","Data":"8270e33358c8830e71bd2ed5044b195492f393282a16c3c044aee73060e751a9"} Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.235979 4796 generic.go:334] "Generic (PLEG): container finished" podID="9b4178e4-10a3-4011-8994-b7ca6f64b45d" containerID="4bbd9db58b22a6fee8f59144defafcc2e06243e23093334fa25e1cd4bd2444b2" exitCode=0 Dec 05 
10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.236020 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zptnp" event={"ID":"9b4178e4-10a3-4011-8994-b7ca6f64b45d","Type":"ContainerDied","Data":"4bbd9db58b22a6fee8f59144defafcc2e06243e23093334fa25e1cd4bd2444b2"} Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.237333 4796 generic.go:334] "Generic (PLEG): container finished" podID="356c2f0d-a078-42e4-925a-e4f39864eb48" containerID="0d6153913013fd54860ad502a25257bf3499bc1bb66acf947da50cb7f5bce740" exitCode=0 Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.237381 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" event={"ID":"356c2f0d-a078-42e4-925a-e4f39864eb48","Type":"ContainerDied","Data":"0d6153913013fd54860ad502a25257bf3499bc1bb66acf947da50cb7f5bce740"} Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.239278 4796 generic.go:334] "Generic (PLEG): container finished" podID="656b901f-73ef-4d9d-adb1-a8db28382f48" containerID="a71605dfe3217f3b15fc4e9076d36c1409b4eb1b3ff0e1059871bf3772406630" exitCode=0 Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.239316 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xblc" event={"ID":"656b901f-73ef-4d9d-adb1-a8db28382f48","Type":"ContainerDied","Data":"a71605dfe3217f3b15fc4e9076d36c1409b4eb1b3ff0e1059871bf3772406630"} Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.275485 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6b04e09-09cc-4ca6-a5bd-61a46535f226-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fdddq\" (UID: \"c6b04e09-09cc-4ca6-a5bd-61a46535f226\") " pod="openshift-marketplace/marketplace-operator-79b997595-fdddq" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.275519 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26h9b\" (UniqueName: \"kubernetes.io/projected/c6b04e09-09cc-4ca6-a5bd-61a46535f226-kube-api-access-26h9b\") pod \"marketplace-operator-79b997595-fdddq\" (UID: \"c6b04e09-09cc-4ca6-a5bd-61a46535f226\") " pod="openshift-marketplace/marketplace-operator-79b997595-fdddq" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.275592 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6b04e09-09cc-4ca6-a5bd-61a46535f226-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fdddq\" (UID: \"c6b04e09-09cc-4ca6-a5bd-61a46535f226\") " pod="openshift-marketplace/marketplace-operator-79b997595-fdddq" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.276389 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c6b04e09-09cc-4ca6-a5bd-61a46535f226-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fdddq\" (UID: \"c6b04e09-09cc-4ca6-a5bd-61a46535f226\") " pod="openshift-marketplace/marketplace-operator-79b997595-fdddq" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.280045 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c6b04e09-09cc-4ca6-a5bd-61a46535f226-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fdddq\" (UID: \"c6b04e09-09cc-4ca6-a5bd-61a46535f226\") " pod="openshift-marketplace/marketplace-operator-79b997595-fdddq" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.287567 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26h9b\" (UniqueName: \"kubernetes.io/projected/c6b04e09-09cc-4ca6-a5bd-61a46535f226-kube-api-access-26h9b\") pod \"marketplace-operator-79b997595-fdddq\" (UID: 
\"c6b04e09-09cc-4ca6-a5bd-61a46535f226\") " pod="openshift-marketplace/marketplace-operator-79b997595-fdddq" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.372136 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fdddq" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.458467 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zptnp" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.483954 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpjsg\" (UniqueName: \"kubernetes.io/projected/9b4178e4-10a3-4011-8994-b7ca6f64b45d-kube-api-access-lpjsg\") pod \"9b4178e4-10a3-4011-8994-b7ca6f64b45d\" (UID: \"9b4178e4-10a3-4011-8994-b7ca6f64b45d\") " Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.484008 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b4178e4-10a3-4011-8994-b7ca6f64b45d-utilities\") pod \"9b4178e4-10a3-4011-8994-b7ca6f64b45d\" (UID: \"9b4178e4-10a3-4011-8994-b7ca6f64b45d\") " Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.484090 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b4178e4-10a3-4011-8994-b7ca6f64b45d-catalog-content\") pod \"9b4178e4-10a3-4011-8994-b7ca6f64b45d\" (UID: \"9b4178e4-10a3-4011-8994-b7ca6f64b45d\") " Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.487340 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b4178e4-10a3-4011-8994-b7ca6f64b45d-utilities" (OuterVolumeSpecName: "utilities") pod "9b4178e4-10a3-4011-8994-b7ca6f64b45d" (UID: "9b4178e4-10a3-4011-8994-b7ca6f64b45d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.504182 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b4178e4-10a3-4011-8994-b7ca6f64b45d-kube-api-access-lpjsg" (OuterVolumeSpecName: "kube-api-access-lpjsg") pod "9b4178e4-10a3-4011-8994-b7ca6f64b45d" (UID: "9b4178e4-10a3-4011-8994-b7ca6f64b45d"). InnerVolumeSpecName "kube-api-access-lpjsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.543442 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b4178e4-10a3-4011-8994-b7ca6f64b45d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b4178e4-10a3-4011-8994-b7ca6f64b45d" (UID: "9b4178e4-10a3-4011-8994-b7ca6f64b45d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.585619 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpjsg\" (UniqueName: \"kubernetes.io/projected/9b4178e4-10a3-4011-8994-b7ca6f64b45d-kube-api-access-lpjsg\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.585818 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b4178e4-10a3-4011-8994-b7ca6f64b45d-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.585829 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b4178e4-10a3-4011-8994-b7ca6f64b45d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.665275 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-snx4m" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.671076 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8gj86" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.672454 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9xblc" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.686017 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2786ac62-1a3d-46fe-951c-be542f08bf55-catalog-content\") pod \"2786ac62-1a3d-46fe-951c-be542f08bf55\" (UID: \"2786ac62-1a3d-46fe-951c-be542f08bf55\") " Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.686044 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxtsx\" (UniqueName: \"kubernetes.io/projected/49519f1b-a5e0-4d0f-bf1d-d6927a8f0957-kube-api-access-mxtsx\") pod \"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957\" (UID: \"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957\") " Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.686067 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvz6r\" (UniqueName: \"kubernetes.io/projected/2786ac62-1a3d-46fe-951c-be542f08bf55-kube-api-access-pvz6r\") pod \"2786ac62-1a3d-46fe-951c-be542f08bf55\" (UID: \"2786ac62-1a3d-46fe-951c-be542f08bf55\") " Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.686094 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/656b901f-73ef-4d9d-adb1-a8db28382f48-utilities\") pod \"656b901f-73ef-4d9d-adb1-a8db28382f48\" (UID: \"656b901f-73ef-4d9d-adb1-a8db28382f48\") " Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.686110 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49519f1b-a5e0-4d0f-bf1d-d6927a8f0957-catalog-content\") pod \"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957\" (UID: \"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957\") " Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.686132 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2786ac62-1a3d-46fe-951c-be542f08bf55-utilities\") pod \"2786ac62-1a3d-46fe-951c-be542f08bf55\" (UID: \"2786ac62-1a3d-46fe-951c-be542f08bf55\") " Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.686150 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/656b901f-73ef-4d9d-adb1-a8db28382f48-catalog-content\") pod \"656b901f-73ef-4d9d-adb1-a8db28382f48\" (UID: \"656b901f-73ef-4d9d-adb1-a8db28382f48\") " Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.686166 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7s8r\" (UniqueName: \"kubernetes.io/projected/656b901f-73ef-4d9d-adb1-a8db28382f48-kube-api-access-r7s8r\") pod \"656b901f-73ef-4d9d-adb1-a8db28382f48\" (UID: \"656b901f-73ef-4d9d-adb1-a8db28382f48\") " Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.686181 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49519f1b-a5e0-4d0f-bf1d-d6927a8f0957-utilities\") pod \"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957\" (UID: \"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957\") " Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.688887 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/656b901f-73ef-4d9d-adb1-a8db28382f48-utilities" (OuterVolumeSpecName: "utilities") pod "656b901f-73ef-4d9d-adb1-a8db28382f48" (UID: 
"656b901f-73ef-4d9d-adb1-a8db28382f48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.688992 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2786ac62-1a3d-46fe-951c-be542f08bf55-utilities" (OuterVolumeSpecName: "utilities") pod "2786ac62-1a3d-46fe-951c-be542f08bf55" (UID: "2786ac62-1a3d-46fe-951c-be542f08bf55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.690827 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49519f1b-a5e0-4d0f-bf1d-d6927a8f0957-kube-api-access-mxtsx" (OuterVolumeSpecName: "kube-api-access-mxtsx") pod "49519f1b-a5e0-4d0f-bf1d-d6927a8f0957" (UID: "49519f1b-a5e0-4d0f-bf1d-d6927a8f0957"). InnerVolumeSpecName "kube-api-access-mxtsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.691384 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/656b901f-73ef-4d9d-adb1-a8db28382f48-kube-api-access-r7s8r" (OuterVolumeSpecName: "kube-api-access-r7s8r") pod "656b901f-73ef-4d9d-adb1-a8db28382f48" (UID: "656b901f-73ef-4d9d-adb1-a8db28382f48"). InnerVolumeSpecName "kube-api-access-r7s8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.693791 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2786ac62-1a3d-46fe-951c-be542f08bf55-kube-api-access-pvz6r" (OuterVolumeSpecName: "kube-api-access-pvz6r") pod "2786ac62-1a3d-46fe-951c-be542f08bf55" (UID: "2786ac62-1a3d-46fe-951c-be542f08bf55"). InnerVolumeSpecName "kube-api-access-pvz6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.698345 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.700183 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49519f1b-a5e0-4d0f-bf1d-d6927a8f0957-utilities" (OuterVolumeSpecName: "utilities") pod "49519f1b-a5e0-4d0f-bf1d-d6927a8f0957" (UID: "49519f1b-a5e0-4d0f-bf1d-d6927a8f0957"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.720204 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/656b901f-73ef-4d9d-adb1-a8db28382f48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "656b901f-73ef-4d9d-adb1-a8db28382f48" (UID: "656b901f-73ef-4d9d-adb1-a8db28382f48"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.755516 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49519f1b-a5e0-4d0f-bf1d-d6927a8f0957-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49519f1b-a5e0-4d0f-bf1d-d6927a8f0957" (UID: "49519f1b-a5e0-4d0f-bf1d-d6927a8f0957"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.776192 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2786ac62-1a3d-46fe-951c-be542f08bf55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2786ac62-1a3d-46fe-951c-be542f08bf55" (UID: "2786ac62-1a3d-46fe-951c-be542f08bf55"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.786486 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/356c2f0d-a078-42e4-925a-e4f39864eb48-marketplace-trusted-ca\") pod \"356c2f0d-a078-42e4-925a-e4f39864eb48\" (UID: \"356c2f0d-a078-42e4-925a-e4f39864eb48\") " Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.786532 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/356c2f0d-a078-42e4-925a-e4f39864eb48-marketplace-operator-metrics\") pod \"356c2f0d-a078-42e4-925a-e4f39864eb48\" (UID: \"356c2f0d-a078-42e4-925a-e4f39864eb48\") " Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.786555 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t9z7\" (UniqueName: \"kubernetes.io/projected/356c2f0d-a078-42e4-925a-e4f39864eb48-kube-api-access-6t9z7\") pod \"356c2f0d-a078-42e4-925a-e4f39864eb48\" (UID: \"356c2f0d-a078-42e4-925a-e4f39864eb48\") " Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.786712 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvz6r\" (UniqueName: \"kubernetes.io/projected/2786ac62-1a3d-46fe-951c-be542f08bf55-kube-api-access-pvz6r\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.786724 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/656b901f-73ef-4d9d-adb1-a8db28382f48-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.786733 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49519f1b-a5e0-4d0f-bf1d-d6927a8f0957-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:47 
crc kubenswrapper[4796]: I1205 10:30:47.786740 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2786ac62-1a3d-46fe-951c-be542f08bf55-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.786748 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/656b901f-73ef-4d9d-adb1-a8db28382f48-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.786757 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7s8r\" (UniqueName: \"kubernetes.io/projected/656b901f-73ef-4d9d-adb1-a8db28382f48-kube-api-access-r7s8r\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.786764 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49519f1b-a5e0-4d0f-bf1d-d6927a8f0957-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.786773 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2786ac62-1a3d-46fe-951c-be542f08bf55-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.786780 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxtsx\" (UniqueName: \"kubernetes.io/projected/49519f1b-a5e0-4d0f-bf1d-d6927a8f0957-kube-api-access-mxtsx\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.787373 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/356c2f0d-a078-42e4-925a-e4f39864eb48-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "356c2f0d-a078-42e4-925a-e4f39864eb48" (UID: "356c2f0d-a078-42e4-925a-e4f39864eb48"). 
InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.789028 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/356c2f0d-a078-42e4-925a-e4f39864eb48-kube-api-access-6t9z7" (OuterVolumeSpecName: "kube-api-access-6t9z7") pod "356c2f0d-a078-42e4-925a-e4f39864eb48" (UID: "356c2f0d-a078-42e4-925a-e4f39864eb48"). InnerVolumeSpecName "kube-api-access-6t9z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.789354 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/356c2f0d-a078-42e4-925a-e4f39864eb48-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "356c2f0d-a078-42e4-925a-e4f39864eb48" (UID: "356c2f0d-a078-42e4-925a-e4f39864eb48"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.874038 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fdddq"] Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.887314 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/356c2f0d-a078-42e4-925a-e4f39864eb48-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.887480 4796 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/356c2f0d-a078-42e4-925a-e4f39864eb48-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:47 crc kubenswrapper[4796]: I1205 10:30:47.887549 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t9z7\" (UniqueName: 
\"kubernetes.io/projected/356c2f0d-a078-42e4-925a-e4f39864eb48-kube-api-access-6t9z7\") on node \"crc\" DevicePath \"\"" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.244370 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fdddq" event={"ID":"c6b04e09-09cc-4ca6-a5bd-61a46535f226","Type":"ContainerStarted","Data":"b61d11e8a32fcb3514317dcd312b5439034ee2e286245ee8067b3b79d7081ea5"} Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.244579 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fdddq" event={"ID":"c6b04e09-09cc-4ca6-a5bd-61a46535f226","Type":"ContainerStarted","Data":"a7b209d5f5e36fa03152cd050edce38aa72c2feb784a859c4d3dc5a65d814195"} Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.244595 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fdddq" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.246198 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gj86" event={"ID":"2786ac62-1a3d-46fe-951c-be542f08bf55","Type":"ContainerDied","Data":"42bd1ff35b2aad79a361c64120068e49361cae84a956ce2825f9c87d1c06b743"} Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.246243 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8gj86" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.246246 4796 scope.go:117] "RemoveContainer" containerID="8270e33358c8830e71bd2ed5044b195492f393282a16c3c044aee73060e751a9" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.247130 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fdddq" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.248893 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zptnp" event={"ID":"9b4178e4-10a3-4011-8994-b7ca6f64b45d","Type":"ContainerDied","Data":"144b4763b97f1b35f1ecb9d83b9619775887c27c44ee897ae5a620a4771b9c09"} Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.248938 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zptnp" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.250293 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" event={"ID":"356c2f0d-a078-42e4-925a-e4f39864eb48","Type":"ContainerDied","Data":"e4b1f48ef73739019deec84a57d71420d34209f91484a8e06dbc585c29aead36"} Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.250302 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gmlb4" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.252360 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9xblc" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.252393 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xblc" event={"ID":"656b901f-73ef-4d9d-adb1-a8db28382f48","Type":"ContainerDied","Data":"f69508c305d32ed1564dac2a8c5830a3ce86c33f52fd1fbd7a1f599e593f5776"} Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.254128 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-snx4m" event={"ID":"49519f1b-a5e0-4d0f-bf1d-d6927a8f0957","Type":"ContainerDied","Data":"68281d0229588fdbd09c9684dd6c31af8d2d0684cb8293a9242bb9f08cdedbbb"} Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.254186 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-snx4m" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.257075 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fdddq" podStartSLOduration=1.257059704 podStartE2EDuration="1.257059704s" podCreationTimestamp="2025-12-05 10:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:30:48.256988479 +0000 UTC m=+194.545094002" watchObservedRunningTime="2025-12-05 10:30:48.257059704 +0000 UTC m=+194.545165217" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.267958 4796 scope.go:117] "RemoveContainer" containerID="e50cc919e537b034382656e8bd0e283cf501843e17775a4825932db2301ae888" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.270816 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9xblc"] Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.273287 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-9xblc"] Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.280191 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8gj86"] Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.284213 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8gj86"] Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.289297 4796 scope.go:117] "RemoveContainer" containerID="8bcbb941ffd8b14338c930f985a21260f87b1f104b58553328c2d19379a20070" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.293180 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zptnp"] Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.297811 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zptnp"] Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.301889 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gmlb4"] Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.303818 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gmlb4"] Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.320330 4796 scope.go:117] "RemoveContainer" containerID="4bbd9db58b22a6fee8f59144defafcc2e06243e23093334fa25e1cd4bd2444b2" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.328028 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-snx4m"] Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.329664 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-snx4m"] Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.331304 4796 scope.go:117] "RemoveContainer" containerID="7fe94cf5258efb11848b1c08cf51936081f74f79da7a72e2b99faba41cbda078" Dec 05 
10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.341320 4796 scope.go:117] "RemoveContainer" containerID="15433742bd388422a2b410b19d79ce6d70d09a617e8b44135e4b918e1ee6d7d1" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.353119 4796 scope.go:117] "RemoveContainer" containerID="0d6153913013fd54860ad502a25257bf3499bc1bb66acf947da50cb7f5bce740" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.362637 4796 scope.go:117] "RemoveContainer" containerID="a71605dfe3217f3b15fc4e9076d36c1409b4eb1b3ff0e1059871bf3772406630" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.370555 4796 scope.go:117] "RemoveContainer" containerID="5e98a7e96ed5574b9836d27c4d0e5b3d6c666a173dc618dea4280f5e3e0ae0ee" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.380060 4796 scope.go:117] "RemoveContainer" containerID="3b1456a1a50a42141de95a4639c67f6897644d0b16b77b2353f4c43e12ef1945" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.390878 4796 scope.go:117] "RemoveContainer" containerID="3ff3a937cabf9836c0a3e2de56134418d16bca907552095d3d9f295e02a04370" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.399385 4796 scope.go:117] "RemoveContainer" containerID="960448d7b5c84ac47377a140ebe2b450e422a782335ad486601cc7b2a6cc9481" Dec 05 10:30:48 crc kubenswrapper[4796]: I1205 10:30:48.409048 4796 scope.go:117] "RemoveContainer" containerID="16a001201e88293b084dca9882cc2e113c456fba34cdbfe54b1c210e289c1fc8" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.235094 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p9cbq"] Dec 05 10:30:49 crc kubenswrapper[4796]: E1205 10:30:49.235246 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2786ac62-1a3d-46fe-951c-be542f08bf55" containerName="extract-content" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.235258 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2786ac62-1a3d-46fe-951c-be542f08bf55" containerName="extract-content" Dec 05 
10:30:49 crc kubenswrapper[4796]: E1205 10:30:49.235268 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656b901f-73ef-4d9d-adb1-a8db28382f48" containerName="extract-utilities" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.235273 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="656b901f-73ef-4d9d-adb1-a8db28382f48" containerName="extract-utilities" Dec 05 10:30:49 crc kubenswrapper[4796]: E1205 10:30:49.235280 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4178e4-10a3-4011-8994-b7ca6f64b45d" containerName="extract-content" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.235285 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4178e4-10a3-4011-8994-b7ca6f64b45d" containerName="extract-content" Dec 05 10:30:49 crc kubenswrapper[4796]: E1205 10:30:49.235293 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656b901f-73ef-4d9d-adb1-a8db28382f48" containerName="registry-server" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.235298 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="656b901f-73ef-4d9d-adb1-a8db28382f48" containerName="registry-server" Dec 05 10:30:49 crc kubenswrapper[4796]: E1205 10:30:49.235305 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49519f1b-a5e0-4d0f-bf1d-d6927a8f0957" containerName="registry-server" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.235311 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="49519f1b-a5e0-4d0f-bf1d-d6927a8f0957" containerName="registry-server" Dec 05 10:30:49 crc kubenswrapper[4796]: E1205 10:30:49.235317 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="356c2f0d-a078-42e4-925a-e4f39864eb48" containerName="marketplace-operator" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.235322 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="356c2f0d-a078-42e4-925a-e4f39864eb48" containerName="marketplace-operator" Dec 05 
10:30:49 crc kubenswrapper[4796]: E1205 10:30:49.235328 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49519f1b-a5e0-4d0f-bf1d-d6927a8f0957" containerName="extract-content" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.235333 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="49519f1b-a5e0-4d0f-bf1d-d6927a8f0957" containerName="extract-content" Dec 05 10:30:49 crc kubenswrapper[4796]: E1205 10:30:49.235338 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2786ac62-1a3d-46fe-951c-be542f08bf55" containerName="extract-utilities" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.235343 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2786ac62-1a3d-46fe-951c-be542f08bf55" containerName="extract-utilities" Dec 05 10:30:49 crc kubenswrapper[4796]: E1205 10:30:49.235352 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4178e4-10a3-4011-8994-b7ca6f64b45d" containerName="registry-server" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.235356 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4178e4-10a3-4011-8994-b7ca6f64b45d" containerName="registry-server" Dec 05 10:30:49 crc kubenswrapper[4796]: E1205 10:30:49.235363 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49519f1b-a5e0-4d0f-bf1d-d6927a8f0957" containerName="extract-utilities" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.235368 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="49519f1b-a5e0-4d0f-bf1d-d6927a8f0957" containerName="extract-utilities" Dec 05 10:30:49 crc kubenswrapper[4796]: E1205 10:30:49.235377 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656b901f-73ef-4d9d-adb1-a8db28382f48" containerName="extract-content" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.235382 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="656b901f-73ef-4d9d-adb1-a8db28382f48" containerName="extract-content" Dec 05 
10:30:49 crc kubenswrapper[4796]: E1205 10:30:49.235393 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2786ac62-1a3d-46fe-951c-be542f08bf55" containerName="registry-server" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.235398 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2786ac62-1a3d-46fe-951c-be542f08bf55" containerName="registry-server" Dec 05 10:30:49 crc kubenswrapper[4796]: E1205 10:30:49.235419 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4178e4-10a3-4011-8994-b7ca6f64b45d" containerName="extract-utilities" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.235425 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4178e4-10a3-4011-8994-b7ca6f64b45d" containerName="extract-utilities" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.235503 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b4178e4-10a3-4011-8994-b7ca6f64b45d" containerName="registry-server" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.235513 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="49519f1b-a5e0-4d0f-bf1d-d6927a8f0957" containerName="registry-server" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.235519 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="656b901f-73ef-4d9d-adb1-a8db28382f48" containerName="registry-server" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.235525 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="356c2f0d-a078-42e4-925a-e4f39864eb48" containerName="marketplace-operator" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.235532 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2786ac62-1a3d-46fe-951c-be542f08bf55" containerName="registry-server" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.236093 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p9cbq" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.238258 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.242311 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9cbq"] Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.401862 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b84a3c6f-5d02-4cc3-8644-14c368436983-catalog-content\") pod \"community-operators-p9cbq\" (UID: \"b84a3c6f-5d02-4cc3-8644-14c368436983\") " pod="openshift-marketplace/community-operators-p9cbq" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.401899 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9xw7\" (UniqueName: \"kubernetes.io/projected/b84a3c6f-5d02-4cc3-8644-14c368436983-kube-api-access-v9xw7\") pod \"community-operators-p9cbq\" (UID: \"b84a3c6f-5d02-4cc3-8644-14c368436983\") " pod="openshift-marketplace/community-operators-p9cbq" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.401938 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b84a3c6f-5d02-4cc3-8644-14c368436983-utilities\") pod \"community-operators-p9cbq\" (UID: \"b84a3c6f-5d02-4cc3-8644-14c368436983\") " pod="openshift-marketplace/community-operators-p9cbq" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.435738 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gklxw"] Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.437027 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gklxw" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.438416 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.443810 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gklxw"] Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.502790 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b84a3c6f-5d02-4cc3-8644-14c368436983-catalog-content\") pod \"community-operators-p9cbq\" (UID: \"b84a3c6f-5d02-4cc3-8644-14c368436983\") " pod="openshift-marketplace/community-operators-p9cbq" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.502829 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9xw7\" (UniqueName: \"kubernetes.io/projected/b84a3c6f-5d02-4cc3-8644-14c368436983-kube-api-access-v9xw7\") pod \"community-operators-p9cbq\" (UID: \"b84a3c6f-5d02-4cc3-8644-14c368436983\") " pod="openshift-marketplace/community-operators-p9cbq" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.502855 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b84a3c6f-5d02-4cc3-8644-14c368436983-utilities\") pod \"community-operators-p9cbq\" (UID: \"b84a3c6f-5d02-4cc3-8644-14c368436983\") " pod="openshift-marketplace/community-operators-p9cbq" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.503152 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b84a3c6f-5d02-4cc3-8644-14c368436983-catalog-content\") pod \"community-operators-p9cbq\" (UID: \"b84a3c6f-5d02-4cc3-8644-14c368436983\") " 
pod="openshift-marketplace/community-operators-p9cbq" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.503229 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b84a3c6f-5d02-4cc3-8644-14c368436983-utilities\") pod \"community-operators-p9cbq\" (UID: \"b84a3c6f-5d02-4cc3-8644-14c368436983\") " pod="openshift-marketplace/community-operators-p9cbq" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.517154 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9xw7\" (UniqueName: \"kubernetes.io/projected/b84a3c6f-5d02-4cc3-8644-14c368436983-kube-api-access-v9xw7\") pod \"community-operators-p9cbq\" (UID: \"b84a3c6f-5d02-4cc3-8644-14c368436983\") " pod="openshift-marketplace/community-operators-p9cbq" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.548003 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p9cbq" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.604736 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6dc75b2-1e17-4aeb-a328-b430bd9e33a7-catalog-content\") pod \"certified-operators-gklxw\" (UID: \"a6dc75b2-1e17-4aeb-a328-b430bd9e33a7\") " pod="openshift-marketplace/certified-operators-gklxw" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.604778 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht6fj\" (UniqueName: \"kubernetes.io/projected/a6dc75b2-1e17-4aeb-a328-b430bd9e33a7-kube-api-access-ht6fj\") pod \"certified-operators-gklxw\" (UID: \"a6dc75b2-1e17-4aeb-a328-b430bd9e33a7\") " pod="openshift-marketplace/certified-operators-gklxw" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.604798 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6dc75b2-1e17-4aeb-a328-b430bd9e33a7-utilities\") pod \"certified-operators-gklxw\" (UID: \"a6dc75b2-1e17-4aeb-a328-b430bd9e33a7\") " pod="openshift-marketplace/certified-operators-gklxw" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.706085 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6dc75b2-1e17-4aeb-a328-b430bd9e33a7-catalog-content\") pod \"certified-operators-gklxw\" (UID: \"a6dc75b2-1e17-4aeb-a328-b430bd9e33a7\") " pod="openshift-marketplace/certified-operators-gklxw" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.706138 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht6fj\" (UniqueName: \"kubernetes.io/projected/a6dc75b2-1e17-4aeb-a328-b430bd9e33a7-kube-api-access-ht6fj\") pod \"certified-operators-gklxw\" (UID: \"a6dc75b2-1e17-4aeb-a328-b430bd9e33a7\") " pod="openshift-marketplace/certified-operators-gklxw" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.706157 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6dc75b2-1e17-4aeb-a328-b430bd9e33a7-utilities\") pod \"certified-operators-gklxw\" (UID: \"a6dc75b2-1e17-4aeb-a328-b430bd9e33a7\") " pod="openshift-marketplace/certified-operators-gklxw" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.706853 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6dc75b2-1e17-4aeb-a328-b430bd9e33a7-catalog-content\") pod \"certified-operators-gklxw\" (UID: \"a6dc75b2-1e17-4aeb-a328-b430bd9e33a7\") " pod="openshift-marketplace/certified-operators-gklxw" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.706978 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6dc75b2-1e17-4aeb-a328-b430bd9e33a7-utilities\") pod \"certified-operators-gklxw\" (UID: \"a6dc75b2-1e17-4aeb-a328-b430bd9e33a7\") " pod="openshift-marketplace/certified-operators-gklxw" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.719297 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht6fj\" (UniqueName: \"kubernetes.io/projected/a6dc75b2-1e17-4aeb-a328-b430bd9e33a7-kube-api-access-ht6fj\") pod \"certified-operators-gklxw\" (UID: \"a6dc75b2-1e17-4aeb-a328-b430bd9e33a7\") " pod="openshift-marketplace/certified-operators-gklxw" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.747713 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gklxw" Dec 05 10:30:49 crc kubenswrapper[4796]: I1205 10:30:49.876641 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p9cbq"] Dec 05 10:30:50 crc kubenswrapper[4796]: I1205 10:30:50.038646 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2786ac62-1a3d-46fe-951c-be542f08bf55" path="/var/lib/kubelet/pods/2786ac62-1a3d-46fe-951c-be542f08bf55/volumes" Dec 05 10:30:50 crc kubenswrapper[4796]: I1205 10:30:50.039623 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="356c2f0d-a078-42e4-925a-e4f39864eb48" path="/var/lib/kubelet/pods/356c2f0d-a078-42e4-925a-e4f39864eb48/volumes" Dec 05 10:30:50 crc kubenswrapper[4796]: I1205 10:30:50.040153 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49519f1b-a5e0-4d0f-bf1d-d6927a8f0957" path="/var/lib/kubelet/pods/49519f1b-a5e0-4d0f-bf1d-d6927a8f0957/volumes" Dec 05 10:30:50 crc kubenswrapper[4796]: I1205 10:30:50.041141 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="656b901f-73ef-4d9d-adb1-a8db28382f48" 
path="/var/lib/kubelet/pods/656b901f-73ef-4d9d-adb1-a8db28382f48/volumes" Dec 05 10:30:50 crc kubenswrapper[4796]: I1205 10:30:50.041663 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b4178e4-10a3-4011-8994-b7ca6f64b45d" path="/var/lib/kubelet/pods/9b4178e4-10a3-4011-8994-b7ca6f64b45d/volumes" Dec 05 10:30:50 crc kubenswrapper[4796]: I1205 10:30:50.071250 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gklxw"] Dec 05 10:30:50 crc kubenswrapper[4796]: W1205 10:30:50.075326 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6dc75b2_1e17_4aeb_a328_b430bd9e33a7.slice/crio-451ebebdd404c759abf0c41559fc00cdf1b818b243b576b1feb58605bf48f002 WatchSource:0}: Error finding container 451ebebdd404c759abf0c41559fc00cdf1b818b243b576b1feb58605bf48f002: Status 404 returned error can't find the container with id 451ebebdd404c759abf0c41559fc00cdf1b818b243b576b1feb58605bf48f002 Dec 05 10:30:50 crc kubenswrapper[4796]: I1205 10:30:50.267785 4796 generic.go:334] "Generic (PLEG): container finished" podID="b84a3c6f-5d02-4cc3-8644-14c368436983" containerID="d61027dffa4f9e1b643c11814e23051c9e4710478c22810e737fb94b214965d1" exitCode=0 Dec 05 10:30:50 crc kubenswrapper[4796]: I1205 10:30:50.267838 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9cbq" event={"ID":"b84a3c6f-5d02-4cc3-8644-14c368436983","Type":"ContainerDied","Data":"d61027dffa4f9e1b643c11814e23051c9e4710478c22810e737fb94b214965d1"} Dec 05 10:30:50 crc kubenswrapper[4796]: I1205 10:30:50.267860 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9cbq" event={"ID":"b84a3c6f-5d02-4cc3-8644-14c368436983","Type":"ContainerStarted","Data":"269af380fea0b207e395d21513a698dc4ff957602d351a460e41cfda69541008"} Dec 05 10:30:50 crc kubenswrapper[4796]: I1205 10:30:50.272515 
4796 generic.go:334] "Generic (PLEG): container finished" podID="a6dc75b2-1e17-4aeb-a328-b430bd9e33a7" containerID="1e42d182b61afedd392374937d032dfe7d93540abb7d35c4514665502ca1c241" exitCode=0 Dec 05 10:30:50 crc kubenswrapper[4796]: I1205 10:30:50.272905 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gklxw" event={"ID":"a6dc75b2-1e17-4aeb-a328-b430bd9e33a7","Type":"ContainerDied","Data":"1e42d182b61afedd392374937d032dfe7d93540abb7d35c4514665502ca1c241"} Dec 05 10:30:50 crc kubenswrapper[4796]: I1205 10:30:50.272931 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gklxw" event={"ID":"a6dc75b2-1e17-4aeb-a328-b430bd9e33a7","Type":"ContainerStarted","Data":"451ebebdd404c759abf0c41559fc00cdf1b818b243b576b1feb58605bf48f002"} Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.277612 4796 generic.go:334] "Generic (PLEG): container finished" podID="b84a3c6f-5d02-4cc3-8644-14c368436983" containerID="d504ddcce12f5c5b2c21f7659baaa3235a82c01e699bcf4770b7e9e21073d617" exitCode=0 Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.277697 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9cbq" event={"ID":"b84a3c6f-5d02-4cc3-8644-14c368436983","Type":"ContainerDied","Data":"d504ddcce12f5c5b2c21f7659baaa3235a82c01e699bcf4770b7e9e21073d617"} Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.280558 4796 generic.go:334] "Generic (PLEG): container finished" podID="a6dc75b2-1e17-4aeb-a328-b430bd9e33a7" containerID="98c72ae9fa8abccd34295b6a97c7fd1f224902e55b4c667f2345d1376929b429" exitCode=0 Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.280593 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gklxw" event={"ID":"a6dc75b2-1e17-4aeb-a328-b430bd9e33a7","Type":"ContainerDied","Data":"98c72ae9fa8abccd34295b6a97c7fd1f224902e55b4c667f2345d1376929b429"} Dec 05 
10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.639004 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qvw7s"] Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.640422 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvw7s" Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.643525 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvw7s"] Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.644455 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.730240 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6n9f\" (UniqueName: \"kubernetes.io/projected/cc02dcd1-0779-4692-8d9a-78bd5cefa3ea-kube-api-access-j6n9f\") pod \"redhat-marketplace-qvw7s\" (UID: \"cc02dcd1-0779-4692-8d9a-78bd5cefa3ea\") " pod="openshift-marketplace/redhat-marketplace-qvw7s" Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.730313 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc02dcd1-0779-4692-8d9a-78bd5cefa3ea-catalog-content\") pod \"redhat-marketplace-qvw7s\" (UID: \"cc02dcd1-0779-4692-8d9a-78bd5cefa3ea\") " pod="openshift-marketplace/redhat-marketplace-qvw7s" Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.730339 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc02dcd1-0779-4692-8d9a-78bd5cefa3ea-utilities\") pod \"redhat-marketplace-qvw7s\" (UID: \"cc02dcd1-0779-4692-8d9a-78bd5cefa3ea\") " pod="openshift-marketplace/redhat-marketplace-qvw7s" Dec 05 10:30:51 crc kubenswrapper[4796]: 
I1205 10:30:51.830970 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6n9f\" (UniqueName: \"kubernetes.io/projected/cc02dcd1-0779-4692-8d9a-78bd5cefa3ea-kube-api-access-j6n9f\") pod \"redhat-marketplace-qvw7s\" (UID: \"cc02dcd1-0779-4692-8d9a-78bd5cefa3ea\") " pod="openshift-marketplace/redhat-marketplace-qvw7s" Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.831026 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc02dcd1-0779-4692-8d9a-78bd5cefa3ea-catalog-content\") pod \"redhat-marketplace-qvw7s\" (UID: \"cc02dcd1-0779-4692-8d9a-78bd5cefa3ea\") " pod="openshift-marketplace/redhat-marketplace-qvw7s" Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.831048 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc02dcd1-0779-4692-8d9a-78bd5cefa3ea-utilities\") pod \"redhat-marketplace-qvw7s\" (UID: \"cc02dcd1-0779-4692-8d9a-78bd5cefa3ea\") " pod="openshift-marketplace/redhat-marketplace-qvw7s" Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.831362 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc02dcd1-0779-4692-8d9a-78bd5cefa3ea-utilities\") pod \"redhat-marketplace-qvw7s\" (UID: \"cc02dcd1-0779-4692-8d9a-78bd5cefa3ea\") " pod="openshift-marketplace/redhat-marketplace-qvw7s" Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.831568 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc02dcd1-0779-4692-8d9a-78bd5cefa3ea-catalog-content\") pod \"redhat-marketplace-qvw7s\" (UID: \"cc02dcd1-0779-4692-8d9a-78bd5cefa3ea\") " pod="openshift-marketplace/redhat-marketplace-qvw7s" Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.839699 4796 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fj572"] Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.840524 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fj572" Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.842326 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.843917 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fj572"] Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.854494 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6n9f\" (UniqueName: \"kubernetes.io/projected/cc02dcd1-0779-4692-8d9a-78bd5cefa3ea-kube-api-access-j6n9f\") pod \"redhat-marketplace-qvw7s\" (UID: \"cc02dcd1-0779-4692-8d9a-78bd5cefa3ea\") " pod="openshift-marketplace/redhat-marketplace-qvw7s" Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.933361 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05818125-1b72-4058-ace6-04c114506db0-catalog-content\") pod \"redhat-operators-fj572\" (UID: \"05818125-1b72-4058-ace6-04c114506db0\") " pod="openshift-marketplace/redhat-operators-fj572" Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.933420 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05818125-1b72-4058-ace6-04c114506db0-utilities\") pod \"redhat-operators-fj572\" (UID: \"05818125-1b72-4058-ace6-04c114506db0\") " pod="openshift-marketplace/redhat-operators-fj572" Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.933545 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-ffb2k\" (UniqueName: \"kubernetes.io/projected/05818125-1b72-4058-ace6-04c114506db0-kube-api-access-ffb2k\") pod \"redhat-operators-fj572\" (UID: \"05818125-1b72-4058-ace6-04c114506db0\") " pod="openshift-marketplace/redhat-operators-fj572" Dec 05 10:30:51 crc kubenswrapper[4796]: I1205 10:30:51.954765 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvw7s" Dec 05 10:30:52 crc kubenswrapper[4796]: I1205 10:30:52.034387 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffb2k\" (UniqueName: \"kubernetes.io/projected/05818125-1b72-4058-ace6-04c114506db0-kube-api-access-ffb2k\") pod \"redhat-operators-fj572\" (UID: \"05818125-1b72-4058-ace6-04c114506db0\") " pod="openshift-marketplace/redhat-operators-fj572" Dec 05 10:30:52 crc kubenswrapper[4796]: I1205 10:30:52.034447 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05818125-1b72-4058-ace6-04c114506db0-catalog-content\") pod \"redhat-operators-fj572\" (UID: \"05818125-1b72-4058-ace6-04c114506db0\") " pod="openshift-marketplace/redhat-operators-fj572" Dec 05 10:30:52 crc kubenswrapper[4796]: I1205 10:30:52.034466 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05818125-1b72-4058-ace6-04c114506db0-utilities\") pod \"redhat-operators-fj572\" (UID: \"05818125-1b72-4058-ace6-04c114506db0\") " pod="openshift-marketplace/redhat-operators-fj572" Dec 05 10:30:52 crc kubenswrapper[4796]: I1205 10:30:52.035887 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05818125-1b72-4058-ace6-04c114506db0-catalog-content\") pod \"redhat-operators-fj572\" (UID: \"05818125-1b72-4058-ace6-04c114506db0\") " 
pod="openshift-marketplace/redhat-operators-fj572" Dec 05 10:30:52 crc kubenswrapper[4796]: I1205 10:30:52.036005 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05818125-1b72-4058-ace6-04c114506db0-utilities\") pod \"redhat-operators-fj572\" (UID: \"05818125-1b72-4058-ace6-04c114506db0\") " pod="openshift-marketplace/redhat-operators-fj572" Dec 05 10:30:52 crc kubenswrapper[4796]: I1205 10:30:52.060475 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffb2k\" (UniqueName: \"kubernetes.io/projected/05818125-1b72-4058-ace6-04c114506db0-kube-api-access-ffb2k\") pod \"redhat-operators-fj572\" (UID: \"05818125-1b72-4058-ace6-04c114506db0\") " pod="openshift-marketplace/redhat-operators-fj572" Dec 05 10:30:52 crc kubenswrapper[4796]: I1205 10:30:52.183274 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fj572" Dec 05 10:30:52 crc kubenswrapper[4796]: I1205 10:30:52.294429 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gklxw" event={"ID":"a6dc75b2-1e17-4aeb-a328-b430bd9e33a7","Type":"ContainerStarted","Data":"319793dc2b956b41f00712c6fa785801e7ffccf10d27fa2990ad2c9ce26093f1"} Dec 05 10:30:52 crc kubenswrapper[4796]: I1205 10:30:52.305836 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p9cbq" event={"ID":"b84a3c6f-5d02-4cc3-8644-14c368436983","Type":"ContainerStarted","Data":"a5ca5db7d79ba85cdbbea8c2f0534546584bdef36b522ea78d411a99b0855d72"} Dec 05 10:30:52 crc kubenswrapper[4796]: I1205 10:30:52.323729 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gklxw" podStartSLOduration=1.847371799 podStartE2EDuration="3.323711996s" podCreationTimestamp="2025-12-05 10:30:49 +0000 UTC" firstStartedPulling="2025-12-05 
10:30:50.273860086 +0000 UTC m=+196.561965599" lastFinishedPulling="2025-12-05 10:30:51.750200283 +0000 UTC m=+198.038305796" observedRunningTime="2025-12-05 10:30:52.318071432 +0000 UTC m=+198.606176946" watchObservedRunningTime="2025-12-05 10:30:52.323711996 +0000 UTC m=+198.611817508" Dec 05 10:30:52 crc kubenswrapper[4796]: I1205 10:30:52.341836 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p9cbq" podStartSLOduration=1.847653882 podStartE2EDuration="3.341821014s" podCreationTimestamp="2025-12-05 10:30:49 +0000 UTC" firstStartedPulling="2025-12-05 10:30:50.269245663 +0000 UTC m=+196.557351176" lastFinishedPulling="2025-12-05 10:30:51.763412796 +0000 UTC m=+198.051518308" observedRunningTime="2025-12-05 10:30:52.334365213 +0000 UTC m=+198.622470715" watchObservedRunningTime="2025-12-05 10:30:52.341821014 +0000 UTC m=+198.629926527" Dec 05 10:30:52 crc kubenswrapper[4796]: I1205 10:30:52.344497 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvw7s"] Dec 05 10:30:52 crc kubenswrapper[4796]: W1205 10:30:52.347949 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc02dcd1_0779_4692_8d9a_78bd5cefa3ea.slice/crio-1e73be5f5ebca3cff573b7b1f1dec6e997b07be8344dc8351ea63d3262baadb8 WatchSource:0}: Error finding container 1e73be5f5ebca3cff573b7b1f1dec6e997b07be8344dc8351ea63d3262baadb8: Status 404 returned error can't find the container with id 1e73be5f5ebca3cff573b7b1f1dec6e997b07be8344dc8351ea63d3262baadb8 Dec 05 10:30:52 crc kubenswrapper[4796]: I1205 10:30:52.549548 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fj572"] Dec 05 10:30:53 crc kubenswrapper[4796]: I1205 10:30:53.310911 4796 generic.go:334] "Generic (PLEG): container finished" podID="cc02dcd1-0779-4692-8d9a-78bd5cefa3ea" 
containerID="cf6c0a9197a9b10894dc9ed119ed1f920f0b844b041d0e865e61b34e183d9719" exitCode=0 Dec 05 10:30:53 crc kubenswrapper[4796]: I1205 10:30:53.311017 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvw7s" event={"ID":"cc02dcd1-0779-4692-8d9a-78bd5cefa3ea","Type":"ContainerDied","Data":"cf6c0a9197a9b10894dc9ed119ed1f920f0b844b041d0e865e61b34e183d9719"} Dec 05 10:30:53 crc kubenswrapper[4796]: I1205 10:30:53.311238 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvw7s" event={"ID":"cc02dcd1-0779-4692-8d9a-78bd5cefa3ea","Type":"ContainerStarted","Data":"1e73be5f5ebca3cff573b7b1f1dec6e997b07be8344dc8351ea63d3262baadb8"} Dec 05 10:30:53 crc kubenswrapper[4796]: I1205 10:30:53.312551 4796 generic.go:334] "Generic (PLEG): container finished" podID="05818125-1b72-4058-ace6-04c114506db0" containerID="4ea7b3c50634fb9a7780ac331fca86e7249c5d483144d534d0ec2612edd34145" exitCode=0 Dec 05 10:30:53 crc kubenswrapper[4796]: I1205 10:30:53.312574 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fj572" event={"ID":"05818125-1b72-4058-ace6-04c114506db0","Type":"ContainerDied","Data":"4ea7b3c50634fb9a7780ac331fca86e7249c5d483144d534d0ec2612edd34145"} Dec 05 10:30:53 crc kubenswrapper[4796]: I1205 10:30:53.312599 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fj572" event={"ID":"05818125-1b72-4058-ace6-04c114506db0","Type":"ContainerStarted","Data":"e407faccf54d6fc29548d69f2c2cdd125c118877518bc9053a57a27e47774fe3"} Dec 05 10:30:55 crc kubenswrapper[4796]: I1205 10:30:55.325486 4796 generic.go:334] "Generic (PLEG): container finished" podID="cc02dcd1-0779-4692-8d9a-78bd5cefa3ea" containerID="339e3eb85df902152de59bd8d0a26f4d1d96a2c5476857468a7edfb131060dce" exitCode=0 Dec 05 10:30:55 crc kubenswrapper[4796]: I1205 10:30:55.325583 4796 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-qvw7s" event={"ID":"cc02dcd1-0779-4692-8d9a-78bd5cefa3ea","Type":"ContainerDied","Data":"339e3eb85df902152de59bd8d0a26f4d1d96a2c5476857468a7edfb131060dce"} Dec 05 10:30:55 crc kubenswrapper[4796]: I1205 10:30:55.328004 4796 generic.go:334] "Generic (PLEG): container finished" podID="05818125-1b72-4058-ace6-04c114506db0" containerID="cda78b46396072e59ad152ee4bbea120845db52efc4b24fd31918c73bfa7f434" exitCode=0 Dec 05 10:30:55 crc kubenswrapper[4796]: I1205 10:30:55.328093 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fj572" event={"ID":"05818125-1b72-4058-ace6-04c114506db0","Type":"ContainerDied","Data":"cda78b46396072e59ad152ee4bbea120845db52efc4b24fd31918c73bfa7f434"} Dec 05 10:30:56 crc kubenswrapper[4796]: I1205 10:30:56.335185 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvw7s" event={"ID":"cc02dcd1-0779-4692-8d9a-78bd5cefa3ea","Type":"ContainerStarted","Data":"b4df9fc338564f65728c8806bd18ca6f0d6bf65e758aed458835ff7945eee089"} Dec 05 10:30:56 crc kubenswrapper[4796]: I1205 10:30:56.337622 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fj572" event={"ID":"05818125-1b72-4058-ace6-04c114506db0","Type":"ContainerStarted","Data":"651e591f48fe497821237c537fa90767397a8a82a4328688b47b99d8a36b52ca"} Dec 05 10:30:56 crc kubenswrapper[4796]: I1205 10:30:56.348297 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qvw7s" podStartSLOduration=2.834833396 podStartE2EDuration="5.348281753s" podCreationTimestamp="2025-12-05 10:30:51 +0000 UTC" firstStartedPulling="2025-12-05 10:30:53.312286255 +0000 UTC m=+199.600391768" lastFinishedPulling="2025-12-05 10:30:55.825734602 +0000 UTC m=+202.113840125" observedRunningTime="2025-12-05 10:30:56.347401711 +0000 UTC m=+202.635507223" 
watchObservedRunningTime="2025-12-05 10:30:56.348281753 +0000 UTC m=+202.636387266" Dec 05 10:30:56 crc kubenswrapper[4796]: I1205 10:30:56.363584 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fj572" podStartSLOduration=2.87768956 podStartE2EDuration="5.363569794s" podCreationTimestamp="2025-12-05 10:30:51 +0000 UTC" firstStartedPulling="2025-12-05 10:30:53.31344994 +0000 UTC m=+199.601555454" lastFinishedPulling="2025-12-05 10:30:55.799330174 +0000 UTC m=+202.087435688" observedRunningTime="2025-12-05 10:30:56.360959722 +0000 UTC m=+202.649065256" watchObservedRunningTime="2025-12-05 10:30:56.363569794 +0000 UTC m=+202.651675307" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.548993 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p9cbq" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.549306 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p9cbq" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.576567 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p9cbq" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.744803 4796 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.745299 4796 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.745376 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.745633 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf" gracePeriod=15 Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.745648 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b" gracePeriod=15 Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.745709 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361" gracePeriod=15 Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.745772 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9" gracePeriod=15 Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.745777 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2" gracePeriod=15 Dec 05 10:30:59 crc 
kubenswrapper[4796]: I1205 10:30:59.746253 4796 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 05 10:30:59 crc kubenswrapper[4796]: E1205 10:30:59.746438 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.746449 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 05 10:30:59 crc kubenswrapper[4796]: E1205 10:30:59.746455 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.746460 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 05 10:30:59 crc kubenswrapper[4796]: E1205 10:30:59.746467 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.746472 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 05 10:30:59 crc kubenswrapper[4796]: E1205 10:30:59.746478 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.746484 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 05 10:30:59 crc kubenswrapper[4796]: E1205 10:30:59.746490 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 05 10:30:59 
crc kubenswrapper[4796]: I1205 10:30:59.746495 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 05 10:30:59 crc kubenswrapper[4796]: E1205 10:30:59.746505 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.746510 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 05 10:30:59 crc kubenswrapper[4796]: E1205 10:30:59.746516 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.746522 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.746595 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.746603 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.746610 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.746615 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.746622 4796 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.746784 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.748450 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gklxw" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.748478 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gklxw" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.776207 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.783079 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gklxw" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.815538 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.815591 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.815671 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.916270 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.916304 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.916335 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.916357 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.916376 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.916413 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.916417 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.916466 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.916510 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.916544 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:30:59 crc kubenswrapper[4796]: I1205 10:30:59.916560 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.017548 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.017596 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.017622 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.017657 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.017709 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.017735 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.017748 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.017762 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.017793 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" 
(UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.017803 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.068064 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 10:31:00 crc kubenswrapper[4796]: W1205 10:31:00.081390 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-d64bb001703d5352d8f86c724d37286bb46c5eb4df37e0620374c52f3a104305 WatchSource:0}: Error finding container d64bb001703d5352d8f86c724d37286bb46c5eb4df37e0620374c52f3a104305: Status 404 returned error can't find the container with id d64bb001703d5352d8f86c724d37286bb46c5eb4df37e0620374c52f3a104305 Dec 05 10:31:00 crc kubenswrapper[4796]: E1205 10:31:00.083201 4796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.25.20:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187e4b15976ded58 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 10:31:00.082773336 +0000 UTC m=+206.370878849,LastTimestamp:2025-12-05 10:31:00.082773336 +0000 UTC m=+206.370878849,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.355668 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.356867 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.357485 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf" exitCode=0 Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.357513 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361" exitCode=0 Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.357520 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b" exitCode=0 Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.357527 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2" exitCode=2 Dec 05 
10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.357582 4796 scope.go:117] "RemoveContainer" containerID="3f356ee59cd44263032aa85c6df3394d63f9e2376b928a456c7f06b7ccdf59ea" Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.358473 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"68d41a818190d5ee6efbd83547ba2bd5779b6e8f1d9c698ad11921e5ba72382a"} Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.358507 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d64bb001703d5352d8f86c724d37286bb46c5eb4df37e0620374c52f3a104305"} Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.361721 4796 generic.go:334] "Generic (PLEG): container finished" podID="35c6cfc7-cda9-4c88-9354-27745015055f" containerID="bca6bb00e8b083e80c3244608885a967b09402120d3b699675b3d1a67166933d" exitCode=0 Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.361797 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"35c6cfc7-cda9-4c88-9354-27745015055f","Type":"ContainerDied","Data":"bca6bb00e8b083e80c3244608885a967b09402120d3b699675b3d1a67166933d"} Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.387645 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p9cbq" Dec 05 10:31:00 crc kubenswrapper[4796]: I1205 10:31:00.389980 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gklxw" Dec 05 10:31:01 crc kubenswrapper[4796]: I1205 10:31:01.367772 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 05 10:31:01 crc kubenswrapper[4796]: I1205 10:31:01.628013 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 05 10:31:01 crc kubenswrapper[4796]: I1205 10:31:01.632543 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35c6cfc7-cda9-4c88-9354-27745015055f-kube-api-access\") pod \"35c6cfc7-cda9-4c88-9354-27745015055f\" (UID: \"35c6cfc7-cda9-4c88-9354-27745015055f\") " Dec 05 10:31:01 crc kubenswrapper[4796]: I1205 10:31:01.632591 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/35c6cfc7-cda9-4c88-9354-27745015055f-var-lock\") pod \"35c6cfc7-cda9-4c88-9354-27745015055f\" (UID: \"35c6cfc7-cda9-4c88-9354-27745015055f\") " Dec 05 10:31:01 crc kubenswrapper[4796]: I1205 10:31:01.632734 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35c6cfc7-cda9-4c88-9354-27745015055f-var-lock" (OuterVolumeSpecName: "var-lock") pod "35c6cfc7-cda9-4c88-9354-27745015055f" (UID: "35c6cfc7-cda9-4c88-9354-27745015055f"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:31:01 crc kubenswrapper[4796]: I1205 10:31:01.632922 4796 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/35c6cfc7-cda9-4c88-9354-27745015055f-var-lock\") on node \"crc\" DevicePath \"\"" Dec 05 10:31:01 crc kubenswrapper[4796]: I1205 10:31:01.635778 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c6cfc7-cda9-4c88-9354-27745015055f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "35c6cfc7-cda9-4c88-9354-27745015055f" (UID: "35c6cfc7-cda9-4c88-9354-27745015055f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:31:01 crc kubenswrapper[4796]: I1205 10:31:01.733850 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35c6cfc7-cda9-4c88-9354-27745015055f-kubelet-dir\") pod \"35c6cfc7-cda9-4c88-9354-27745015055f\" (UID: \"35c6cfc7-cda9-4c88-9354-27745015055f\") " Dec 05 10:31:01 crc kubenswrapper[4796]: I1205 10:31:01.733940 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35c6cfc7-cda9-4c88-9354-27745015055f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "35c6cfc7-cda9-4c88-9354-27745015055f" (UID: "35c6cfc7-cda9-4c88-9354-27745015055f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:31:01 crc kubenswrapper[4796]: I1205 10:31:01.734061 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35c6cfc7-cda9-4c88-9354-27745015055f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 05 10:31:01 crc kubenswrapper[4796]: I1205 10:31:01.734074 4796 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35c6cfc7-cda9-4c88-9354-27745015055f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 05 10:31:01 crc kubenswrapper[4796]: I1205 10:31:01.955103 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qvw7s" Dec 05 10:31:01 crc kubenswrapper[4796]: I1205 10:31:01.955147 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qvw7s" Dec 05 10:31:01 crc kubenswrapper[4796]: I1205 10:31:01.984151 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qvw7s" Dec 05 10:31:02 crc kubenswrapper[4796]: I1205 10:31:02.183537 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fj572" Dec 05 10:31:02 crc kubenswrapper[4796]: I1205 10:31:02.183562 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fj572" Dec 05 10:31:02 crc kubenswrapper[4796]: I1205 10:31:02.207495 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fj572" Dec 05 10:31:02 crc kubenswrapper[4796]: I1205 10:31:02.373801 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"35c6cfc7-cda9-4c88-9354-27745015055f","Type":"ContainerDied","Data":"9d2d487bce3b8c30c791fbeedb6908d3ec816d7f16e89d7b40940797076357c6"} Dec 05 10:31:02 crc kubenswrapper[4796]: I1205 10:31:02.373838 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d2d487bce3b8c30c791fbeedb6908d3ec816d7f16e89d7b40940797076357c6" Dec 05 10:31:02 crc kubenswrapper[4796]: I1205 10:31:02.373964 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 05 10:31:02 crc kubenswrapper[4796]: I1205 10:31:02.398515 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fj572" Dec 05 10:31:02 crc kubenswrapper[4796]: I1205 10:31:02.398851 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qvw7s" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.119170 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.122566 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.251164 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.251213 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.251271 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.251273 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.251351 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.251408 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.251674 4796 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.251711 4796 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.251722 4796 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.379283 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.379908 4796 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9" exitCode=0 Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.379972 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.380005 4796 scope.go:117] "RemoveContainer" containerID="d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.390658 4796 scope.go:117] "RemoveContainer" containerID="ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.401471 4796 scope.go:117] "RemoveContainer" containerID="d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.410972 4796 scope.go:117] "RemoveContainer" containerID="ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.419709 4796 scope.go:117] "RemoveContainer" containerID="ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.428877 4796 scope.go:117] "RemoveContainer" containerID="a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.441309 4796 scope.go:117] "RemoveContainer" containerID="d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf" Dec 05 10:31:03 crc kubenswrapper[4796]: E1205 10:31:03.441625 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\": container with ID starting with d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf not found: ID does not exist" containerID="d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.441658 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf"} err="failed to get container status \"d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\": rpc error: code = NotFound desc = could not find container \"d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf\": container with ID starting with d36f0bf5bdd5520c02264df33abfc7b6c5e3b5f68fabe9f0dd8319386c4d13bf not found: ID does not exist" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.441695 4796 scope.go:117] "RemoveContainer" containerID="ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361" Dec 05 10:31:03 crc kubenswrapper[4796]: E1205 10:31:03.441966 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\": container with ID starting with ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361 not found: ID does not exist" containerID="ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.441995 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361"} err="failed to get container status \"ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\": rpc error: code = NotFound desc = could not find container \"ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361\": container with ID starting with ee3ba7d59d991c7481e9c00566554f77770a16744cbf16e73f1c96cae6f9d361 not found: ID does not exist" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.442030 4796 scope.go:117] "RemoveContainer" containerID="d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b" Dec 05 10:31:03 crc kubenswrapper[4796]: E1205 10:31:03.442270 4796 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\": container with ID starting with d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b not found: ID does not exist" containerID="d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.442297 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b"} err="failed to get container status \"d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\": rpc error: code = NotFound desc = could not find container \"d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b\": container with ID starting with d90f607227c86e2bb42a7c7ee309fee89235c26420175a565f1ec21d1a82f92b not found: ID does not exist" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.442319 4796 scope.go:117] "RemoveContainer" containerID="ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2" Dec 05 10:31:03 crc kubenswrapper[4796]: E1205 10:31:03.442600 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\": container with ID starting with ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2 not found: ID does not exist" containerID="ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.442619 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2"} err="failed to get container status \"ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\": rpc error: code = NotFound desc = could not find container 
\"ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2\": container with ID starting with ad6024da3f4e51f3a41dc01bd7a635ded9d1dde9ed36a3d513a3e01ed20f20c2 not found: ID does not exist" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.442635 4796 scope.go:117] "RemoveContainer" containerID="ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9" Dec 05 10:31:03 crc kubenswrapper[4796]: E1205 10:31:03.442896 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\": container with ID starting with ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9 not found: ID does not exist" containerID="ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.442918 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9"} err="failed to get container status \"ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\": rpc error: code = NotFound desc = could not find container \"ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9\": container with ID starting with ac9884b5052c485e25a5f32cddb19455ea26bc05f7b5b2e8e795f4b68cea61e9 not found: ID does not exist" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.442949 4796 scope.go:117] "RemoveContainer" containerID="a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0" Dec 05 10:31:03 crc kubenswrapper[4796]: E1205 10:31:03.443164 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\": container with ID starting with a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0 not found: ID does not exist" 
containerID="a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0" Dec 05 10:31:03 crc kubenswrapper[4796]: I1205 10:31:03.443194 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0"} err="failed to get container status \"a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\": rpc error: code = NotFound desc = could not find container \"a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0\": container with ID starting with a24c462575c7cc390b21322894de6af3c97585ad2efe9615b4c4d694dd494cf0 not found: ID does not exist" Dec 05 10:31:04 crc kubenswrapper[4796]: I1205 10:31:04.035870 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 05 10:31:04 crc kubenswrapper[4796]: I1205 10:31:04.784065 4796 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:04 crc kubenswrapper[4796]: I1205 10:31:04.784561 4796 status_manager.go:851] "Failed to get status for pod" podUID="a6dc75b2-1e17-4aeb-a328-b430bd9e33a7" pod="openshift-marketplace/certified-operators-gklxw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gklxw\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:04 crc kubenswrapper[4796]: I1205 10:31:04.784841 4796 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:04 crc kubenswrapper[4796]: I1205 10:31:04.787090 4796 status_manager.go:851] "Failed to get status for pod" podUID="cc02dcd1-0779-4692-8d9a-78bd5cefa3ea" pod="openshift-marketplace/redhat-marketplace-qvw7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qvw7s\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:04 crc kubenswrapper[4796]: I1205 10:31:04.787267 4796 status_manager.go:851] "Failed to get status for pod" podUID="35c6cfc7-cda9-4c88-9354-27745015055f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:04 crc kubenswrapper[4796]: I1205 10:31:04.787435 4796 status_manager.go:851] "Failed to get status for pod" podUID="a6dc75b2-1e17-4aeb-a328-b430bd9e33a7" pod="openshift-marketplace/certified-operators-gklxw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gklxw\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:04 crc kubenswrapper[4796]: I1205 10:31:04.787583 4796 status_manager.go:851] "Failed to get status for pod" podUID="05818125-1b72-4058-ace6-04c114506db0" pod="openshift-marketplace/redhat-operators-fj572" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fj572\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:04 crc kubenswrapper[4796]: I1205 10:31:04.787777 4796 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:04 crc kubenswrapper[4796]: I1205 10:31:04.788021 4796 status_manager.go:851] "Failed to get status for pod" podUID="b84a3c6f-5d02-4cc3-8644-14c368436983" pod="openshift-marketplace/community-operators-p9cbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9cbq\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:04 crc kubenswrapper[4796]: I1205 10:31:04.788274 4796 status_manager.go:851] "Failed to get status for pod" podUID="cc02dcd1-0779-4692-8d9a-78bd5cefa3ea" pod="openshift-marketplace/redhat-marketplace-qvw7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qvw7s\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:04 crc kubenswrapper[4796]: I1205 10:31:04.788476 4796 status_manager.go:851] "Failed to get status for pod" podUID="35c6cfc7-cda9-4c88-9354-27745015055f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:04 crc kubenswrapper[4796]: I1205 10:31:04.788704 4796 status_manager.go:851] "Failed to get status for pod" podUID="a6dc75b2-1e17-4aeb-a328-b430bd9e33a7" pod="openshift-marketplace/certified-operators-gklxw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gklxw\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:04 crc kubenswrapper[4796]: I1205 10:31:04.789115 4796 status_manager.go:851] "Failed to get status for pod" podUID="05818125-1b72-4058-ace6-04c114506db0" pod="openshift-marketplace/redhat-operators-fj572" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fj572\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:04 crc kubenswrapper[4796]: I1205 10:31:04.789318 4796 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:04 crc kubenswrapper[4796]: I1205 10:31:04.789533 4796 status_manager.go:851] "Failed to get status for pod" podUID="b84a3c6f-5d02-4cc3-8644-14c368436983" pod="openshift-marketplace/community-operators-p9cbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9cbq\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:05 crc kubenswrapper[4796]: I1205 10:31:05.177473 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:31:05 crc kubenswrapper[4796]: I1205 10:31:05.177517 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:31:05 crc kubenswrapper[4796]: I1205 10:31:05.177555 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:31:05 crc kubenswrapper[4796]: I1205 10:31:05.177922 
4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c0766e0a438d3073e224d9fc409ff70228029c3a307f0b67e5be9cc82e09add"} pod="openshift-machine-config-operator/machine-config-daemon-9pllw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 10:31:05 crc kubenswrapper[4796]: I1205 10:31:05.177977 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" containerID="cri-o://5c0766e0a438d3073e224d9fc409ff70228029c3a307f0b67e5be9cc82e09add" gracePeriod=600 Dec 05 10:31:05 crc kubenswrapper[4796]: I1205 10:31:05.388713 4796 generic.go:334] "Generic (PLEG): container finished" podID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerID="5c0766e0a438d3073e224d9fc409ff70228029c3a307f0b67e5be9cc82e09add" exitCode=0 Dec 05 10:31:05 crc kubenswrapper[4796]: I1205 10:31:05.388769 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerDied","Data":"5c0766e0a438d3073e224d9fc409ff70228029c3a307f0b67e5be9cc82e09add"} Dec 05 10:31:05 crc kubenswrapper[4796]: I1205 10:31:05.389026 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerStarted","Data":"b237acecfc6f07a912dd5d391cf00caaa99fd4cca591ed146b85426c5cf97464"} Dec 05 10:31:05 crc kubenswrapper[4796]: I1205 10:31:05.389486 4796 status_manager.go:851] "Failed to get status for pod" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-9pllw\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:05 crc kubenswrapper[4796]: I1205 10:31:05.389666 4796 status_manager.go:851] "Failed to get status for pod" podUID="cc02dcd1-0779-4692-8d9a-78bd5cefa3ea" pod="openshift-marketplace/redhat-marketplace-qvw7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qvw7s\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:05 crc kubenswrapper[4796]: I1205 10:31:05.389905 4796 status_manager.go:851] "Failed to get status for pod" podUID="35c6cfc7-cda9-4c88-9354-27745015055f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:05 crc kubenswrapper[4796]: I1205 10:31:05.390117 4796 status_manager.go:851] "Failed to get status for pod" podUID="a6dc75b2-1e17-4aeb-a328-b430bd9e33a7" pod="openshift-marketplace/certified-operators-gklxw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gklxw\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:05 crc kubenswrapper[4796]: I1205 10:31:05.390438 4796 status_manager.go:851] "Failed to get status for pod" podUID="05818125-1b72-4058-ace6-04c114506db0" pod="openshift-marketplace/redhat-operators-fj572" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fj572\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:05 crc kubenswrapper[4796]: I1205 10:31:05.398203 4796 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:05 crc kubenswrapper[4796]: I1205 10:31:05.398577 4796 status_manager.go:851] "Failed to get status for pod" podUID="b84a3c6f-5d02-4cc3-8644-14c368436983" pod="openshift-marketplace/community-operators-p9cbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9cbq\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:07 crc kubenswrapper[4796]: E1205 10:31:07.554592 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:07 crc kubenswrapper[4796]: E1205 10:31:07.554940 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:07 crc kubenswrapper[4796]: E1205 10:31:07.555174 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:07 crc kubenswrapper[4796]: E1205 10:31:07.555407 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:07 crc kubenswrapper[4796]: E1205 10:31:07.555630 4796 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial 
tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:07 crc kubenswrapper[4796]: I1205 10:31:07.555657 4796 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 05 10:31:07 crc kubenswrapper[4796]: E1205 10:31:07.555879 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.20:6443: connect: connection refused" interval="200ms" Dec 05 10:31:07 crc kubenswrapper[4796]: E1205 10:31:07.756959 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.20:6443: connect: connection refused" interval="400ms" Dec 05 10:31:08 crc kubenswrapper[4796]: E1205 10:31:08.157405 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.20:6443: connect: connection refused" interval="800ms" Dec 05 10:31:08 crc kubenswrapper[4796]: E1205 10:31:08.656354 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:31:08Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:31:08Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:31:08Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-05T10:31:08Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:08 crc kubenswrapper[4796]: E1205 10:31:08.656573 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:08 crc kubenswrapper[4796]: E1205 10:31:08.656767 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:08 crc kubenswrapper[4796]: E1205 10:31:08.656995 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 
10:31:08 crc kubenswrapper[4796]: E1205 10:31:08.657170 4796 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:08 crc kubenswrapper[4796]: E1205 10:31:08.657188 4796 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 05 10:31:08 crc kubenswrapper[4796]: E1205 10:31:08.845637 4796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.25.20:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187e4b15976ded58 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-05 10:31:00.082773336 +0000 UTC m=+206.370878849,LastTimestamp:2025-12-05 10:31:00.082773336 +0000 UTC m=+206.370878849,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 05 10:31:08 crc kubenswrapper[4796]: E1205 10:31:08.958586 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.20:6443: connect: connection refused" 
interval="1.6s" Dec 05 10:31:10 crc kubenswrapper[4796]: E1205 10:31:10.559815 4796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 192.168.25.20:6443: connect: connection refused" interval="3.2s" Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.030242 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.030664 4796 status_manager.go:851] "Failed to get status for pod" podUID="a6dc75b2-1e17-4aeb-a328-b430bd9e33a7" pod="openshift-marketplace/certified-operators-gklxw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gklxw\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.030946 4796 status_manager.go:851] "Failed to get status for pod" podUID="05818125-1b72-4058-ace6-04c114506db0" pod="openshift-marketplace/redhat-operators-fj572" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fj572\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.031194 4796 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.031435 4796 status_manager.go:851] "Failed to get status for pod" podUID="b84a3c6f-5d02-4cc3-8644-14c368436983" pod="openshift-marketplace/community-operators-p9cbq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9cbq\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.031654 4796 status_manager.go:851] "Failed to get status for pod" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-9pllw\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.031868 4796 status_manager.go:851] "Failed to get status for pod" podUID="cc02dcd1-0779-4692-8d9a-78bd5cefa3ea" pod="openshift-marketplace/redhat-marketplace-qvw7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qvw7s\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.032083 4796 status_manager.go:851] "Failed to get status for pod" podUID="35c6cfc7-cda9-4c88-9354-27745015055f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.040298 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a02bec29-3b71-4b9d-ac9b-1226f8747525" Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.040318 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a02bec29-3b71-4b9d-ac9b-1226f8747525" Dec 05 10:31:11 crc kubenswrapper[4796]: E1205 10:31:11.040561 4796 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.20:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.040951 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:31:11 crc kubenswrapper[4796]: W1205 10:31:11.054031 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-23de07547efdae8458a337b37b99353d5799456b8c84e8f5e7313077bed3ac03 WatchSource:0}: Error finding container 23de07547efdae8458a337b37b99353d5799456b8c84e8f5e7313077bed3ac03: Status 404 returned error can't find the container with id 23de07547efdae8458a337b37b99353d5799456b8c84e8f5e7313077bed3ac03 Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.412853 4796 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8d218f5f038558e3e18a40a90e27c7659c8b75a98ad52973a7890faa32aba0a7" exitCode=0 Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.412944 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8d218f5f038558e3e18a40a90e27c7659c8b75a98ad52973a7890faa32aba0a7"} Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.413060 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"23de07547efdae8458a337b37b99353d5799456b8c84e8f5e7313077bed3ac03"} Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.413267 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="a02bec29-3b71-4b9d-ac9b-1226f8747525" Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.413280 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a02bec29-3b71-4b9d-ac9b-1226f8747525" Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.413527 4796 status_manager.go:851] "Failed to get status for pod" podUID="cc02dcd1-0779-4692-8d9a-78bd5cefa3ea" pod="openshift-marketplace/redhat-marketplace-qvw7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-qvw7s\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:11 crc kubenswrapper[4796]: E1205 10:31:11.413561 4796 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 192.168.25.20:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.413787 4796 status_manager.go:851] "Failed to get status for pod" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-9pllw\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.413997 4796 status_manager.go:851] "Failed to get status for pod" podUID="35c6cfc7-cda9-4c88-9354-27745015055f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.414181 4796 status_manager.go:851] "Failed to get status for pod" podUID="a6dc75b2-1e17-4aeb-a328-b430bd9e33a7" 
pod="openshift-marketplace/certified-operators-gklxw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-gklxw\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.414398 4796 status_manager.go:851] "Failed to get status for pod" podUID="05818125-1b72-4058-ace6-04c114506db0" pod="openshift-marketplace/redhat-operators-fj572" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fj572\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.414607 4796 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:11 crc kubenswrapper[4796]: I1205 10:31:11.414836 4796 status_manager.go:851] "Failed to get status for pod" podUID="b84a3c6f-5d02-4cc3-8644-14c368436983" pod="openshift-marketplace/community-operators-p9cbq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-p9cbq\": dial tcp 192.168.25.20:6443: connect: connection refused" Dec 05 10:31:12 crc kubenswrapper[4796]: I1205 10:31:12.420888 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ca1fe50f6733382370a6c232ab949e7bc0494284608438e9699e769247bc2005"} Dec 05 10:31:12 crc kubenswrapper[4796]: I1205 10:31:12.421100 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"43a81630622c188598a6ee36a85350c120a2638a3779f590285cd034d635359e"} Dec 05 10:31:12 crc kubenswrapper[4796]: I1205 10:31:12.421112 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2b5834ae771c627eda245e1356d43aa6c2747642d251fbc7fe248ebfd314d949"} Dec 05 10:31:12 crc kubenswrapper[4796]: I1205 10:31:12.421121 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b443d95d2f2b42e1d40973b6e8ca9c48d2555a6e11d46553fbe4045f95fa7fe1"} Dec 05 10:31:12 crc kubenswrapper[4796]: I1205 10:31:12.421128 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"07c125a0feda748855706651199199438542abfedc9671976badbf8da57c1344"} Dec 05 10:31:12 crc kubenswrapper[4796]: I1205 10:31:12.421325 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:31:12 crc kubenswrapper[4796]: I1205 10:31:12.421417 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a02bec29-3b71-4b9d-ac9b-1226f8747525" Dec 05 10:31:12 crc kubenswrapper[4796]: I1205 10:31:12.421437 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a02bec29-3b71-4b9d-ac9b-1226f8747525" Dec 05 10:31:13 crc kubenswrapper[4796]: I1205 10:31:13.426977 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 05 10:31:13 crc kubenswrapper[4796]: I1205 
10:31:13.427025 4796 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9" exitCode=1 Dec 05 10:31:13 crc kubenswrapper[4796]: I1205 10:31:13.427051 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9"} Dec 05 10:31:13 crc kubenswrapper[4796]: I1205 10:31:13.427496 4796 scope.go:117] "RemoveContainer" containerID="f316587111a841fbb10488a09168dfeccde34d05f2deb0064857c1f6c1d70fb9" Dec 05 10:31:14 crc kubenswrapper[4796]: I1205 10:31:14.432655 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 05 10:31:14 crc kubenswrapper[4796]: I1205 10:31:14.432870 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5934345c8f599f742fd96a02f53decc02267f2f370011b306dcc3c35751219b6"} Dec 05 10:31:16 crc kubenswrapper[4796]: I1205 10:31:16.041992 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:31:16 crc kubenswrapper[4796]: I1205 10:31:16.042204 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:31:16 crc kubenswrapper[4796]: I1205 10:31:16.045853 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:31:16 crc kubenswrapper[4796]: I1205 10:31:16.848408 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:31:16 crc kubenswrapper[4796]: I1205 10:31:16.855243 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:31:17 crc kubenswrapper[4796]: I1205 10:31:17.433636 4796 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:31:17 crc kubenswrapper[4796]: I1205 10:31:17.443401 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:31:17 crc kubenswrapper[4796]: I1205 10:31:17.548801 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="23a13b86-340a-4293-8f8a-1d8e4156fe9f" Dec 05 10:31:18 crc kubenswrapper[4796]: I1205 10:31:18.447257 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a02bec29-3b71-4b9d-ac9b-1226f8747525" Dec 05 10:31:18 crc kubenswrapper[4796]: I1205 10:31:18.447283 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a02bec29-3b71-4b9d-ac9b-1226f8747525" Dec 05 10:31:18 crc kubenswrapper[4796]: I1205 10:31:18.448851 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="23a13b86-340a-4293-8f8a-1d8e4156fe9f" Dec 05 10:31:18 crc kubenswrapper[4796]: I1205 10:31:18.450299 4796 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://07c125a0feda748855706651199199438542abfedc9671976badbf8da57c1344" Dec 05 10:31:18 crc kubenswrapper[4796]: I1205 10:31:18.450317 4796 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:31:19 crc kubenswrapper[4796]: I1205 10:31:19.449773 4796 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a02bec29-3b71-4b9d-ac9b-1226f8747525" Dec 05 10:31:19 crc kubenswrapper[4796]: I1205 10:31:19.449798 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a02bec29-3b71-4b9d-ac9b-1226f8747525" Dec 05 10:31:19 crc kubenswrapper[4796]: I1205 10:31:19.451657 4796 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="23a13b86-340a-4293-8f8a-1d8e4156fe9f" Dec 05 10:31:23 crc kubenswrapper[4796]: I1205 10:31:23.881118 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 05 10:31:23 crc kubenswrapper[4796]: I1205 10:31:23.920525 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 05 10:31:23 crc kubenswrapper[4796]: I1205 10:31:23.932142 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 05 10:31:24 crc kubenswrapper[4796]: I1205 10:31:24.810903 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 05 10:31:25 crc kubenswrapper[4796]: I1205 10:31:25.366169 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 05 10:31:25 crc kubenswrapper[4796]: I1205 10:31:25.858635 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 05 10:31:25 crc kubenswrapper[4796]: I1205 10:31:25.964810 4796 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 05 10:31:26 crc kubenswrapper[4796]: I1205 10:31:26.142487 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 05 10:31:26 crc kubenswrapper[4796]: I1205 10:31:26.270387 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 05 10:31:26 crc kubenswrapper[4796]: I1205 10:31:26.283513 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 05 10:31:26 crc kubenswrapper[4796]: I1205 10:31:26.372261 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 05 10:31:26 crc kubenswrapper[4796]: I1205 10:31:26.892201 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 05 10:31:27 crc kubenswrapper[4796]: I1205 10:31:27.100220 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 05 10:31:27 crc kubenswrapper[4796]: I1205 10:31:27.230596 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 05 10:31:27 crc kubenswrapper[4796]: I1205 10:31:27.245771 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 05 10:31:27 crc kubenswrapper[4796]: I1205 10:31:27.297023 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 05 10:31:27 crc kubenswrapper[4796]: I1205 10:31:27.542340 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 05 10:31:28 crc kubenswrapper[4796]: I1205 10:31:28.242728 
4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 05 10:31:28 crc kubenswrapper[4796]: I1205 10:31:28.300616 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 05 10:31:28 crc kubenswrapper[4796]: I1205 10:31:28.475797 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 05 10:31:28 crc kubenswrapper[4796]: I1205 10:31:28.680271 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 05 10:31:28 crc kubenswrapper[4796]: I1205 10:31:28.744208 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 05 10:31:28 crc kubenswrapper[4796]: I1205 10:31:28.796234 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 05 10:31:28 crc kubenswrapper[4796]: I1205 10:31:28.872013 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 05 10:31:29 crc kubenswrapper[4796]: I1205 10:31:29.053448 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 05 10:31:29 crc kubenswrapper[4796]: I1205 10:31:29.173866 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 05 10:31:29 crc kubenswrapper[4796]: I1205 10:31:29.480817 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 05 10:31:29 crc kubenswrapper[4796]: I1205 10:31:29.723238 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 05 10:31:29 crc kubenswrapper[4796]: I1205 10:31:29.873538 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 05 10:31:29 crc kubenswrapper[4796]: I1205 10:31:29.954283 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 05 10:31:30 crc kubenswrapper[4796]: I1205 10:31:30.547624 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 05 10:31:30 crc kubenswrapper[4796]: I1205 10:31:30.697553 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 05 10:31:30 crc kubenswrapper[4796]: I1205 10:31:30.772269 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 05 10:31:31 crc kubenswrapper[4796]: I1205 10:31:31.088543 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 05 10:31:31 crc kubenswrapper[4796]: I1205 10:31:31.159775 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 05 10:31:31 crc kubenswrapper[4796]: I1205 10:31:31.197670 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 05 10:31:31 crc kubenswrapper[4796]: I1205 10:31:31.529003 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 05 10:31:31 crc kubenswrapper[4796]: I1205 10:31:31.845956 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 05 10:31:31 crc kubenswrapper[4796]: I1205 
10:31:31.929913 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 05 10:31:32 crc kubenswrapper[4796]: I1205 10:31:32.011800 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 05 10:31:32 crc kubenswrapper[4796]: I1205 10:31:32.031093 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 05 10:31:32 crc kubenswrapper[4796]: I1205 10:31:32.089447 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 05 10:31:32 crc kubenswrapper[4796]: I1205 10:31:32.164800 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 05 10:31:32 crc kubenswrapper[4796]: I1205 10:31:32.432054 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 05 10:31:32 crc kubenswrapper[4796]: I1205 10:31:32.686351 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 05 10:31:32 crc kubenswrapper[4796]: I1205 10:31:32.812894 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 05 10:31:32 crc kubenswrapper[4796]: I1205 10:31:32.880391 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 05 10:31:32 crc kubenswrapper[4796]: I1205 10:31:32.901371 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 05 10:31:32 crc kubenswrapper[4796]: I1205 10:31:32.981472 4796 reflector.go:368] Caches populated for *v1.Service from 
k8s.io/client-go/informers/factory.go:160 Dec 05 10:31:33 crc kubenswrapper[4796]: I1205 10:31:33.208898 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 05 10:31:33 crc kubenswrapper[4796]: I1205 10:31:33.249218 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 05 10:31:33 crc kubenswrapper[4796]: I1205 10:31:33.286020 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 05 10:31:33 crc kubenswrapper[4796]: I1205 10:31:33.351882 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 05 10:31:33 crc kubenswrapper[4796]: I1205 10:31:33.365532 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 05 10:31:33 crc kubenswrapper[4796]: I1205 10:31:33.443574 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 05 10:31:33 crc kubenswrapper[4796]: I1205 10:31:33.705452 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 05 10:31:33 crc kubenswrapper[4796]: I1205 10:31:33.819231 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 05 10:31:33 crc kubenswrapper[4796]: I1205 10:31:33.854513 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 05 10:31:33 crc kubenswrapper[4796]: I1205 10:31:33.883205 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 05 10:31:33 crc kubenswrapper[4796]: I1205 10:31:33.900180 4796 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 05 10:31:33 crc kubenswrapper[4796]: I1205 10:31:33.948170 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 05 10:31:33 crc kubenswrapper[4796]: I1205 10:31:33.952091 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.003943 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.060026 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.090442 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.100750 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.163635 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.209845 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.272382 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.382732 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.471925 4796 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.601253 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.617024 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.646537 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.659601 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.724273 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.734008 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.744820 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.838143 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.897468 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.945028 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 05 10:31:34 crc kubenswrapper[4796]: I1205 10:31:34.972523 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.019934 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.037003 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.190823 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.224546 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.283610 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.368260 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.385450 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.497761 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.523886 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.600235 4796 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.624646 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.679459 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.703582 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.774330 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.882791 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.944629 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.946754 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.955014 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.972273 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 05 10:31:35 crc kubenswrapper[4796]: I1205 10:31:35.990021 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 05 
10:31:36 crc kubenswrapper[4796]: I1205 10:31:36.004005 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 05 10:31:36 crc kubenswrapper[4796]: I1205 10:31:36.106070 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 05 10:31:36 crc kubenswrapper[4796]: I1205 10:31:36.241613 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 05 10:31:36 crc kubenswrapper[4796]: I1205 10:31:36.347358 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 05 10:31:36 crc kubenswrapper[4796]: I1205 10:31:36.427581 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 05 10:31:36 crc kubenswrapper[4796]: I1205 10:31:36.443586 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 05 10:31:36 crc kubenswrapper[4796]: I1205 10:31:36.499181 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 05 10:31:36 crc kubenswrapper[4796]: I1205 10:31:36.499518 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 05 10:31:36 crc kubenswrapper[4796]: I1205 10:31:36.524023 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 05 10:31:36 crc kubenswrapper[4796]: I1205 10:31:36.570212 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 05 10:31:36 crc kubenswrapper[4796]: I1205 10:31:36.639217 4796 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 05 10:31:36 crc kubenswrapper[4796]: I1205 10:31:36.655253 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 05 10:31:36 crc kubenswrapper[4796]: I1205 10:31:36.661072 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 05 10:31:36 crc kubenswrapper[4796]: I1205 10:31:36.663575 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 05 10:31:36 crc kubenswrapper[4796]: I1205 10:31:36.739922 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 05 10:31:36 crc kubenswrapper[4796]: I1205 10:31:36.836192 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 05 10:31:36 crc kubenswrapper[4796]: I1205 10:31:36.846273 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 05 10:31:36 crc kubenswrapper[4796]: I1205 10:31:36.863241 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 05 10:31:36 crc kubenswrapper[4796]: I1205 10:31:36.890129 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 05 10:31:37 crc kubenswrapper[4796]: I1205 10:31:37.103129 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 05 10:31:37 crc kubenswrapper[4796]: I1205 10:31:37.148069 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 05 10:31:37 crc 
kubenswrapper[4796]: I1205 10:31:37.314380 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 05 10:31:37 crc kubenswrapper[4796]: I1205 10:31:37.373042 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 05 10:31:37 crc kubenswrapper[4796]: I1205 10:31:37.472259 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 05 10:31:37 crc kubenswrapper[4796]: I1205 10:31:37.540930 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 05 10:31:37 crc kubenswrapper[4796]: I1205 10:31:37.701736 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 05 10:31:37 crc kubenswrapper[4796]: I1205 10:31:37.733244 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 05 10:31:37 crc kubenswrapper[4796]: I1205 10:31:37.759088 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 05 10:31:37 crc kubenswrapper[4796]: I1205 10:31:37.943898 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 05 10:31:37 crc kubenswrapper[4796]: I1205 10:31:37.957083 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 05 10:31:37 crc kubenswrapper[4796]: I1205 10:31:37.961184 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.064282 4796 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"kube-root-ca.crt" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.094712 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.143479 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.156364 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.237650 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.373036 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.398500 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.408131 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.508235 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.532373 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.559204 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 05 10:31:38 crc 
kubenswrapper[4796]: I1205 10:31:38.606090 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.657018 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.659787 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.697649 4796 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.714532 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.786938 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.813852 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.833525 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.855062 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.889267 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 05 10:31:38 crc kubenswrapper[4796]: I1205 10:31:38.991125 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 05 10:31:39 crc kubenswrapper[4796]: I1205 10:31:39.011245 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 05 10:31:39 crc kubenswrapper[4796]: I1205 10:31:39.030111 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 05 10:31:39 crc kubenswrapper[4796]: I1205 10:31:39.103217 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 05 10:31:39 crc kubenswrapper[4796]: I1205 10:31:39.133720 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 05 10:31:39 crc kubenswrapper[4796]: I1205 10:31:39.396040 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 05 10:31:39 crc kubenswrapper[4796]: I1205 10:31:39.404447 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 05 10:31:39 crc kubenswrapper[4796]: I1205 10:31:39.411167 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 05 10:31:39 crc kubenswrapper[4796]: I1205 10:31:39.517630 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 05 10:31:39 crc kubenswrapper[4796]: I1205 10:31:39.781505 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 05 10:31:39 crc kubenswrapper[4796]: I1205 10:31:39.790714 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 05 10:31:39 crc kubenswrapper[4796]: I1205 
10:31:39.926209 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 05 10:31:39 crc kubenswrapper[4796]: I1205 10:31:39.973182 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 05 10:31:39 crc kubenswrapper[4796]: I1205 10:31:39.976161 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.067920 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.120789 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.137628 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.200944 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.269797 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.293635 4796 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.318396 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.319917 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.417088 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.420760 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.476761 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.484718 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.510502 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.518324 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.527963 4796 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.529356 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=41.529344439 podStartE2EDuration="41.529344439s" podCreationTimestamp="2025-12-05 10:30:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:31:17.44829313 +0000 UTC m=+223.736398643" watchObservedRunningTime="2025-12-05 10:31:40.529344439 +0000 UTC m=+246.817449952" Dec 05 10:31:40 crc 
kubenswrapper[4796]: I1205 10:31:40.530967 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.531004 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.533747 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.543905 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.543896557 podStartE2EDuration="23.543896557s" podCreationTimestamp="2025-12-05 10:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:31:40.541778821 +0000 UTC m=+246.829884344" watchObservedRunningTime="2025-12-05 10:31:40.543896557 +0000 UTC m=+246.832002071" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.626484 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.630407 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.653840 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.686875 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.688632 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.703720 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.713458 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 05 10:31:40 crc kubenswrapper[4796]: I1205 10:31:40.940651 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 05 10:31:41 crc kubenswrapper[4796]: I1205 10:31:41.010433 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 05 10:31:41 crc kubenswrapper[4796]: I1205 10:31:41.076345 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 05 10:31:41 crc kubenswrapper[4796]: I1205 10:31:41.137500 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 05 10:31:41 crc kubenswrapper[4796]: I1205 10:31:41.237528 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 05 10:31:41 crc kubenswrapper[4796]: I1205 10:31:41.252722 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 05 10:31:41 crc kubenswrapper[4796]: I1205 10:31:41.404952 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 05 10:31:41 crc kubenswrapper[4796]: I1205 10:31:41.444534 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 05 10:31:41 crc kubenswrapper[4796]: I1205 10:31:41.471469 4796 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 05 10:31:41 crc kubenswrapper[4796]: I1205 10:31:41.520846 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 05 10:31:41 crc kubenswrapper[4796]: I1205 10:31:41.610229 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 05 10:31:41 crc kubenswrapper[4796]: I1205 10:31:41.713249 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 05 10:31:41 crc kubenswrapper[4796]: I1205 10:31:41.737353 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 05 10:31:41 crc kubenswrapper[4796]: I1205 10:31:41.821390 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 05 10:31:41 crc kubenswrapper[4796]: I1205 10:31:41.995631 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 05 10:31:42 crc kubenswrapper[4796]: I1205 10:31:42.004657 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 05 10:31:42 crc kubenswrapper[4796]: I1205 10:31:42.032874 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 05 10:31:42 crc kubenswrapper[4796]: I1205 10:31:42.107936 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 05 10:31:42 crc kubenswrapper[4796]: I1205 10:31:42.133773 4796 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 05 10:31:42 crc kubenswrapper[4796]: I1205 10:31:42.168640 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 05 10:31:42 crc kubenswrapper[4796]: I1205 10:31:42.362914 4796 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 05 10:31:42 crc kubenswrapper[4796]: I1205 10:31:42.378898 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 05 10:31:42 crc kubenswrapper[4796]: I1205 10:31:42.387410 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 05 10:31:42 crc kubenswrapper[4796]: I1205 10:31:42.414237 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 05 10:31:42 crc kubenswrapper[4796]: I1205 10:31:42.656069 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 05 10:31:42 crc kubenswrapper[4796]: I1205 10:31:42.713438 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 05 10:31:42 crc kubenswrapper[4796]: I1205 10:31:42.724141 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 05 10:31:42 crc kubenswrapper[4796]: I1205 10:31:42.728215 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 05 10:31:42 crc kubenswrapper[4796]: I1205 10:31:42.791241 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 05 10:31:42 crc kubenswrapper[4796]: I1205 10:31:42.896153 4796 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 05 10:31:42 crc kubenswrapper[4796]: I1205 10:31:42.897435 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 05 10:31:42 crc kubenswrapper[4796]: I1205 10:31:42.947502 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 05 10:31:42 crc kubenswrapper[4796]: I1205 10:31:42.978282 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 05 10:31:43 crc kubenswrapper[4796]: I1205 10:31:43.003991 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 05 10:31:43 crc kubenswrapper[4796]: I1205 10:31:43.079619 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 05 10:31:43 crc kubenswrapper[4796]: I1205 10:31:43.161896 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 05 10:31:43 crc kubenswrapper[4796]: I1205 10:31:43.223745 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 05 10:31:43 crc kubenswrapper[4796]: I1205 10:31:43.392889 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 05 10:31:43 crc kubenswrapper[4796]: I1205 10:31:43.403848 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 05 10:31:43 crc kubenswrapper[4796]: I1205 10:31:43.586151 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 05 10:31:43 crc 
kubenswrapper[4796]: I1205 10:31:43.729571 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 05 10:31:43 crc kubenswrapper[4796]: I1205 10:31:43.829877 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 05 10:31:43 crc kubenswrapper[4796]: I1205 10:31:43.863245 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 05 10:31:43 crc kubenswrapper[4796]: I1205 10:31:43.935511 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 05 10:31:44 crc kubenswrapper[4796]: I1205 10:31:44.046039 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 05 10:31:44 crc kubenswrapper[4796]: I1205 10:31:44.071317 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 05 10:31:44 crc kubenswrapper[4796]: I1205 10:31:44.087153 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 05 10:31:44 crc kubenswrapper[4796]: I1205 10:31:44.281282 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 05 10:31:44 crc kubenswrapper[4796]: I1205 10:31:44.645620 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 05 10:31:44 crc kubenswrapper[4796]: I1205 10:31:44.804171 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 05 10:31:44 crc kubenswrapper[4796]: 
I1205 10:31:44.931456 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 05 10:31:45 crc kubenswrapper[4796]: I1205 10:31:45.054117 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 05 10:31:45 crc kubenswrapper[4796]: I1205 10:31:45.500230 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 05 10:31:45 crc kubenswrapper[4796]: I1205 10:31:45.518059 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 05 10:31:45 crc kubenswrapper[4796]: I1205 10:31:45.611919 4796 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 05 10:31:45 crc kubenswrapper[4796]: I1205 10:31:45.742110 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 05 10:31:45 crc kubenswrapper[4796]: I1205 10:31:45.868964 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 05 10:31:46 crc kubenswrapper[4796]: I1205 10:31:46.137605 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 05 10:31:46 crc kubenswrapper[4796]: I1205 10:31:46.311039 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 05 10:31:46 crc kubenswrapper[4796]: I1205 10:31:46.574319 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 05 10:31:46 crc kubenswrapper[4796]: I1205 10:31:46.664801 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 
05 10:31:46 crc kubenswrapper[4796]: I1205 10:31:46.915782 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 05 10:31:47 crc kubenswrapper[4796]: I1205 10:31:47.995783 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 05 10:31:51 crc kubenswrapper[4796]: I1205 10:31:51.418332 4796 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 05 10:31:51 crc kubenswrapper[4796]: I1205 10:31:51.418778 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://68d41a818190d5ee6efbd83547ba2bd5779b6e8f1d9c698ad11921e5ba72382a" gracePeriod=5 Dec 05 10:31:56 crc kubenswrapper[4796]: I1205 10:31:56.589555 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 05 10:31:56 crc kubenswrapper[4796]: I1205 10:31:56.590094 4796 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="68d41a818190d5ee6efbd83547ba2bd5779b6e8f1d9c698ad11921e5ba72382a" exitCode=137 Dec 05 10:31:56 crc kubenswrapper[4796]: I1205 10:31:56.965926 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 05 10:31:56 crc kubenswrapper[4796]: I1205 10:31:56.966006 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 10:31:57 crc kubenswrapper[4796]: I1205 10:31:57.079847 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 05 10:31:57 crc kubenswrapper[4796]: I1205 10:31:57.079915 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 05 10:31:57 crc kubenswrapper[4796]: I1205 10:31:57.079974 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:31:57 crc kubenswrapper[4796]: I1205 10:31:57.080020 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 05 10:31:57 crc kubenswrapper[4796]: I1205 10:31:57.080108 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 05 10:31:57 crc kubenswrapper[4796]: I1205 10:31:57.080139 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 05 10:31:57 crc kubenswrapper[4796]: I1205 10:31:57.080253 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:31:57 crc kubenswrapper[4796]: I1205 10:31:57.080276 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:31:57 crc kubenswrapper[4796]: I1205 10:31:57.080369 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:31:57 crc kubenswrapper[4796]: I1205 10:31:57.080528 4796 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 05 10:31:57 crc kubenswrapper[4796]: I1205 10:31:57.080714 4796 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 05 10:31:57 crc kubenswrapper[4796]: I1205 10:31:57.080795 4796 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 05 10:31:57 crc kubenswrapper[4796]: I1205 10:31:57.080862 4796 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 05 10:31:57 crc kubenswrapper[4796]: I1205 10:31:57.088130 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:31:57 crc kubenswrapper[4796]: I1205 10:31:57.182075 4796 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 05 10:31:57 crc kubenswrapper[4796]: I1205 10:31:57.595295 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 05 10:31:57 crc kubenswrapper[4796]: I1205 10:31:57.595360 4796 scope.go:117] "RemoveContainer" containerID="68d41a818190d5ee6efbd83547ba2bd5779b6e8f1d9c698ad11921e5ba72382a" Dec 05 10:31:57 crc kubenswrapper[4796]: I1205 10:31:57.595413 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 05 10:31:58 crc kubenswrapper[4796]: I1205 10:31:58.035580 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 05 10:31:58 crc kubenswrapper[4796]: I1205 10:31:58.036074 4796 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 05 10:31:58 crc kubenswrapper[4796]: I1205 10:31:58.043579 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 05 10:31:58 crc kubenswrapper[4796]: I1205 10:31:58.043611 4796 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6d4efede-f168-4135-a005-feff81c07a4c" Dec 05 10:31:58 crc kubenswrapper[4796]: I1205 10:31:58.045794 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 05 10:31:58 crc kubenswrapper[4796]: I1205 10:31:58.045821 4796 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6d4efede-f168-4135-a005-feff81c07a4c" Dec 05 10:33:05 crc kubenswrapper[4796]: I1205 10:33:05.177213 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:33:05 crc kubenswrapper[4796]: I1205 10:33:05.177611 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.533411 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-l5zs6"] Dec 05 10:33:07 crc kubenswrapper[4796]: E1205 10:33:07.533786 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.533798 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 05 10:33:07 crc kubenswrapper[4796]: E1205 10:33:07.533807 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c6cfc7-cda9-4c88-9354-27745015055f" containerName="installer" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.533813 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c6cfc7-cda9-4c88-9354-27745015055f" 
containerName="installer" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.533892 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.533903 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c6cfc7-cda9-4c88-9354-27745015055f" containerName="installer" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.534242 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.541102 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-l5zs6"] Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.657474 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-trusted-ca\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.657546 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.657572 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-registry-tls\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: 
\"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.657611 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4pwf\" (UniqueName: \"kubernetes.io/projected/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-kube-api-access-x4pwf\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.657652 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-installation-pull-secrets\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.657670 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-registry-certificates\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.657724 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-ca-trust-extracted\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.657748 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-bound-sa-token\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.672699 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.758571 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4pwf\" (UniqueName: \"kubernetes.io/projected/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-kube-api-access-x4pwf\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.758617 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-installation-pull-secrets\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.758640 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-registry-certificates\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.758665 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-ca-trust-extracted\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.758705 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-bound-sa-token\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.758724 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-trusted-ca\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.758762 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-registry-tls\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.759170 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-ca-trust-extracted\") pod \"image-registry-66df7c8f76-l5zs6\" 
(UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.759855 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-trusted-ca\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.759931 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-registry-certificates\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.763248 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-registry-tls\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.763273 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-installation-pull-secrets\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.771452 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-bound-sa-token\") pod 
\"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.771592 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4pwf\" (UniqueName: \"kubernetes.io/projected/9f9cff24-bb60-49a8-a7d9-cb129a4c7050-kube-api-access-x4pwf\") pod \"image-registry-66df7c8f76-l5zs6\" (UID: \"9f9cff24-bb60-49a8-a7d9-cb129a4c7050\") " pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:07 crc kubenswrapper[4796]: I1205 10:33:07.845233 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:08 crc kubenswrapper[4796]: I1205 10:33:08.173018 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-l5zs6"] Dec 05 10:33:08 crc kubenswrapper[4796]: I1205 10:33:08.851853 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" event={"ID":"9f9cff24-bb60-49a8-a7d9-cb129a4c7050","Type":"ContainerStarted","Data":"0a58bbaa6da1759c70141fb7fc98b40226c25ecdcb625ffbbcb0f00dc811f7db"} Dec 05 10:33:08 crc kubenswrapper[4796]: I1205 10:33:08.852178 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:08 crc kubenswrapper[4796]: I1205 10:33:08.852189 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" event={"ID":"9f9cff24-bb60-49a8-a7d9-cb129a4c7050","Type":"ContainerStarted","Data":"a50ffd39e9a551985f173ac70b5d450c3bdbf34adb96892e86cb600d7feae55f"} Dec 05 10:33:08 crc kubenswrapper[4796]: I1205 10:33:08.865560 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" podStartSLOduration=1.865549547 podStartE2EDuration="1.865549547s" podCreationTimestamp="2025-12-05 10:33:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:33:08.86288498 +0000 UTC m=+335.150990503" watchObservedRunningTime="2025-12-05 10:33:08.865549547 +0000 UTC m=+335.153655050" Dec 05 10:33:27 crc kubenswrapper[4796]: I1205 10:33:27.848842 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-l5zs6" Dec 05 10:33:27 crc kubenswrapper[4796]: I1205 10:33:27.878913 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-k9j8l"] Dec 05 10:33:35 crc kubenswrapper[4796]: I1205 10:33:35.177087 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:33:35 crc kubenswrapper[4796]: I1205 10:33:35.177463 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:33:52 crc kubenswrapper[4796]: I1205 10:33:52.901629 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" podUID="b5fec51d-bef3-426c-ba74-90a48a94d9ce" containerName="registry" containerID="cri-o://4aa8ad3a55294c79cf4f2e9f09944105ff3e4c89815d9978f487a54aff31dcf0" gracePeriod=30 Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 
10:33:53.157769 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.301482 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b5fec51d-bef3-426c-ba74-90a48a94d9ce-registry-certificates\") pod \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.301519 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktqsk\" (UniqueName: \"kubernetes.io/projected/b5fec51d-bef3-426c-ba74-90a48a94d9ce-kube-api-access-ktqsk\") pod \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.301549 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5fec51d-bef3-426c-ba74-90a48a94d9ce-bound-sa-token\") pod \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.301566 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b5fec51d-bef3-426c-ba74-90a48a94d9ce-registry-tls\") pod \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.301646 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " Dec 05 10:33:53 crc 
kubenswrapper[4796]: I1205 10:33:53.301714 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b5fec51d-bef3-426c-ba74-90a48a94d9ce-installation-pull-secrets\") pod \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.302523 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b5fec51d-bef3-426c-ba74-90a48a94d9ce-ca-trust-extracted\") pod \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.302543 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5fec51d-bef3-426c-ba74-90a48a94d9ce-trusted-ca\") pod \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\" (UID: \"b5fec51d-bef3-426c-ba74-90a48a94d9ce\") " Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.302549 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5fec51d-bef3-426c-ba74-90a48a94d9ce-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "b5fec51d-bef3-426c-ba74-90a48a94d9ce" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.303008 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5fec51d-bef3-426c-ba74-90a48a94d9ce-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b5fec51d-bef3-426c-ba74-90a48a94d9ce" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.303082 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5fec51d-bef3-426c-ba74-90a48a94d9ce-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.303100 4796 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b5fec51d-bef3-426c-ba74-90a48a94d9ce-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.307181 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5fec51d-bef3-426c-ba74-90a48a94d9ce-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b5fec51d-bef3-426c-ba74-90a48a94d9ce" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.307466 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5fec51d-bef3-426c-ba74-90a48a94d9ce-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "b5fec51d-bef3-426c-ba74-90a48a94d9ce" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.307831 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5fec51d-bef3-426c-ba74-90a48a94d9ce-kube-api-access-ktqsk" (OuterVolumeSpecName: "kube-api-access-ktqsk") pod "b5fec51d-bef3-426c-ba74-90a48a94d9ce" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce"). InnerVolumeSpecName "kube-api-access-ktqsk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.307934 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5fec51d-bef3-426c-ba74-90a48a94d9ce-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b5fec51d-bef3-426c-ba74-90a48a94d9ce" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.308967 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "b5fec51d-bef3-426c-ba74-90a48a94d9ce" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.315729 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5fec51d-bef3-426c-ba74-90a48a94d9ce-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b5fec51d-bef3-426c-ba74-90a48a94d9ce" (UID: "b5fec51d-bef3-426c-ba74-90a48a94d9ce"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.403451 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktqsk\" (UniqueName: \"kubernetes.io/projected/b5fec51d-bef3-426c-ba74-90a48a94d9ce-kube-api-access-ktqsk\") on node \"crc\" DevicePath \"\"" Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.403474 4796 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5fec51d-bef3-426c-ba74-90a48a94d9ce-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.403483 4796 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b5fec51d-bef3-426c-ba74-90a48a94d9ce-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.403491 4796 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b5fec51d-bef3-426c-ba74-90a48a94d9ce-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 05 10:33:53 crc kubenswrapper[4796]: I1205 10:33:53.403499 4796 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b5fec51d-bef3-426c-ba74-90a48a94d9ce-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 05 10:33:54 crc kubenswrapper[4796]: I1205 10:33:54.010468 4796 generic.go:334] "Generic (PLEG): container finished" podID="b5fec51d-bef3-426c-ba74-90a48a94d9ce" containerID="4aa8ad3a55294c79cf4f2e9f09944105ff3e4c89815d9978f487a54aff31dcf0" exitCode=0 Dec 05 10:33:54 crc kubenswrapper[4796]: I1205 10:33:54.010501 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" 
event={"ID":"b5fec51d-bef3-426c-ba74-90a48a94d9ce","Type":"ContainerDied","Data":"4aa8ad3a55294c79cf4f2e9f09944105ff3e4c89815d9978f487a54aff31dcf0"} Dec 05 10:33:54 crc kubenswrapper[4796]: I1205 10:33:54.010532 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" event={"ID":"b5fec51d-bef3-426c-ba74-90a48a94d9ce","Type":"ContainerDied","Data":"9708a6c108871f89caa1bd938519c44bc3bdd3e8abe9073b92d7e6f1d034828f"} Dec 05 10:33:54 crc kubenswrapper[4796]: I1205 10:33:54.010536 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-k9j8l" Dec 05 10:33:54 crc kubenswrapper[4796]: I1205 10:33:54.010549 4796 scope.go:117] "RemoveContainer" containerID="4aa8ad3a55294c79cf4f2e9f09944105ff3e4c89815d9978f487a54aff31dcf0" Dec 05 10:33:54 crc kubenswrapper[4796]: I1205 10:33:54.025994 4796 scope.go:117] "RemoveContainer" containerID="4aa8ad3a55294c79cf4f2e9f09944105ff3e4c89815d9978f487a54aff31dcf0" Dec 05 10:33:54 crc kubenswrapper[4796]: E1205 10:33:54.026392 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4aa8ad3a55294c79cf4f2e9f09944105ff3e4c89815d9978f487a54aff31dcf0\": container with ID starting with 4aa8ad3a55294c79cf4f2e9f09944105ff3e4c89815d9978f487a54aff31dcf0 not found: ID does not exist" containerID="4aa8ad3a55294c79cf4f2e9f09944105ff3e4c89815d9978f487a54aff31dcf0" Dec 05 10:33:54 crc kubenswrapper[4796]: I1205 10:33:54.026442 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4aa8ad3a55294c79cf4f2e9f09944105ff3e4c89815d9978f487a54aff31dcf0"} err="failed to get container status \"4aa8ad3a55294c79cf4f2e9f09944105ff3e4c89815d9978f487a54aff31dcf0\": rpc error: code = NotFound desc = could not find container \"4aa8ad3a55294c79cf4f2e9f09944105ff3e4c89815d9978f487a54aff31dcf0\": container with ID 
starting with 4aa8ad3a55294c79cf4f2e9f09944105ff3e4c89815d9978f487a54aff31dcf0 not found: ID does not exist" Dec 05 10:33:54 crc kubenswrapper[4796]: I1205 10:33:54.039056 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-k9j8l"] Dec 05 10:33:54 crc kubenswrapper[4796]: I1205 10:33:54.039089 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-k9j8l"] Dec 05 10:33:56 crc kubenswrapper[4796]: I1205 10:33:56.036229 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5fec51d-bef3-426c-ba74-90a48a94d9ce" path="/var/lib/kubelet/pods/b5fec51d-bef3-426c-ba74-90a48a94d9ce/volumes" Dec 05 10:34:05 crc kubenswrapper[4796]: I1205 10:34:05.177323 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:34:05 crc kubenswrapper[4796]: I1205 10:34:05.177827 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:34:05 crc kubenswrapper[4796]: I1205 10:34:05.177876 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:34:05 crc kubenswrapper[4796]: I1205 10:34:05.178440 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b237acecfc6f07a912dd5d391cf00caaa99fd4cca591ed146b85426c5cf97464"} 
pod="openshift-machine-config-operator/machine-config-daemon-9pllw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 10:34:05 crc kubenswrapper[4796]: I1205 10:34:05.178493 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" containerID="cri-o://b237acecfc6f07a912dd5d391cf00caaa99fd4cca591ed146b85426c5cf97464" gracePeriod=600 Dec 05 10:34:06 crc kubenswrapper[4796]: I1205 10:34:06.051450 4796 generic.go:334] "Generic (PLEG): container finished" podID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerID="b237acecfc6f07a912dd5d391cf00caaa99fd4cca591ed146b85426c5cf97464" exitCode=0 Dec 05 10:34:06 crc kubenswrapper[4796]: I1205 10:34:06.051505 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerDied","Data":"b237acecfc6f07a912dd5d391cf00caaa99fd4cca591ed146b85426c5cf97464"} Dec 05 10:34:06 crc kubenswrapper[4796]: I1205 10:34:06.051700 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerStarted","Data":"5f20b7eecdefbb86282fce61808a4c3f117c21f24b1bad20aefe872aef8a2e8b"} Dec 05 10:34:06 crc kubenswrapper[4796]: I1205 10:34:06.051722 4796 scope.go:117] "RemoveContainer" containerID="5c0766e0a438d3073e224d9fc409ff70228029c3a307f0b67e5be9cc82e09add" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.460391 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-qvqsf"] Dec 05 10:35:59 crc kubenswrapper[4796]: E1205 10:35:59.461010 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b5fec51d-bef3-426c-ba74-90a48a94d9ce" containerName="registry" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.461022 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5fec51d-bef3-426c-ba74-90a48a94d9ce" containerName="registry" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.461100 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5fec51d-bef3-426c-ba74-90a48a94d9ce" containerName="registry" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.461463 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-qvqsf" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.462726 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-znnp5"] Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.463242 4796 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-kgzqx" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.463273 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-znnp5" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.464499 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.465086 4796 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lz4nv" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.465234 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.471497 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-qvqsf"] Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.474567 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-kgfhs"] Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.475154 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-kgfhs" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.476972 4796 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-mknth" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.479229 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-znnp5"] Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.484848 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-kgfhs"] Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.553467 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lptpl\" (UniqueName: \"kubernetes.io/projected/55906207-dd26-4827-a801-1808e140a903-kube-api-access-lptpl\") pod \"cert-manager-cainjector-7f985d654d-qvqsf\" (UID: \"55906207-dd26-4827-a801-1808e140a903\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-qvqsf" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.553500 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwmbs\" (UniqueName: \"kubernetes.io/projected/4fbc7cf7-cc54-4a42-af7c-7c7451d7bfc0-kube-api-access-lwmbs\") pod \"cert-manager-5b446d88c5-znnp5\" (UID: \"4fbc7cf7-cc54-4a42-af7c-7c7451d7bfc0\") " pod="cert-manager/cert-manager-5b446d88c5-znnp5" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.553584 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnzw9\" (UniqueName: \"kubernetes.io/projected/dd1a1054-f7c6-4515-9c64-074ce87c169f-kube-api-access-xnzw9\") pod \"cert-manager-webhook-5655c58dd6-kgfhs\" (UID: \"dd1a1054-f7c6-4515-9c64-074ce87c169f\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-kgfhs" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.654175 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnzw9\" (UniqueName: \"kubernetes.io/projected/dd1a1054-f7c6-4515-9c64-074ce87c169f-kube-api-access-xnzw9\") pod \"cert-manager-webhook-5655c58dd6-kgfhs\" (UID: \"dd1a1054-f7c6-4515-9c64-074ce87c169f\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-kgfhs" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.654235 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lptpl\" (UniqueName: \"kubernetes.io/projected/55906207-dd26-4827-a801-1808e140a903-kube-api-access-lptpl\") pod \"cert-manager-cainjector-7f985d654d-qvqsf\" (UID: \"55906207-dd26-4827-a801-1808e140a903\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-qvqsf" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.654259 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwmbs\" (UniqueName: \"kubernetes.io/projected/4fbc7cf7-cc54-4a42-af7c-7c7451d7bfc0-kube-api-access-lwmbs\") pod \"cert-manager-5b446d88c5-znnp5\" (UID: \"4fbc7cf7-cc54-4a42-af7c-7c7451d7bfc0\") " pod="cert-manager/cert-manager-5b446d88c5-znnp5" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.671540 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnzw9\" (UniqueName: \"kubernetes.io/projected/dd1a1054-f7c6-4515-9c64-074ce87c169f-kube-api-access-xnzw9\") pod \"cert-manager-webhook-5655c58dd6-kgfhs\" (UID: \"dd1a1054-f7c6-4515-9c64-074ce87c169f\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-kgfhs" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.671571 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwmbs\" (UniqueName: \"kubernetes.io/projected/4fbc7cf7-cc54-4a42-af7c-7c7451d7bfc0-kube-api-access-lwmbs\") pod \"cert-manager-5b446d88c5-znnp5\" (UID: \"4fbc7cf7-cc54-4a42-af7c-7c7451d7bfc0\") " 
pod="cert-manager/cert-manager-5b446d88c5-znnp5" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.671652 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lptpl\" (UniqueName: \"kubernetes.io/projected/55906207-dd26-4827-a801-1808e140a903-kube-api-access-lptpl\") pod \"cert-manager-cainjector-7f985d654d-qvqsf\" (UID: \"55906207-dd26-4827-a801-1808e140a903\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-qvqsf" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.774964 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-qvqsf" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.781575 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-znnp5" Dec 05 10:35:59 crc kubenswrapper[4796]: I1205 10:35:59.787603 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-kgfhs" Dec 05 10:36:00 crc kubenswrapper[4796]: I1205 10:36:00.134851 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-qvqsf"] Dec 05 10:36:00 crc kubenswrapper[4796]: I1205 10:36:00.141028 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 10:36:00 crc kubenswrapper[4796]: I1205 10:36:00.170661 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-znnp5"] Dec 05 10:36:00 crc kubenswrapper[4796]: W1205 10:36:00.173202 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fbc7cf7_cc54_4a42_af7c_7c7451d7bfc0.slice/crio-766aa05a9b1fb1869aff211093c0aad8d99917e2bc82027e92b12d63df0d324b WatchSource:0}: Error finding container 766aa05a9b1fb1869aff211093c0aad8d99917e2bc82027e92b12d63df0d324b: Status 404 
returned error can't find the container with id 766aa05a9b1fb1869aff211093c0aad8d99917e2bc82027e92b12d63df0d324b Dec 05 10:36:00 crc kubenswrapper[4796]: I1205 10:36:00.180710 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-kgfhs"] Dec 05 10:36:00 crc kubenswrapper[4796]: W1205 10:36:00.183360 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd1a1054_f7c6_4515_9c64_074ce87c169f.slice/crio-e7b27490f38f8c7339dac4ed6c49d113d5f2652245fb52fc548310932fc422c6 WatchSource:0}: Error finding container e7b27490f38f8c7339dac4ed6c49d113d5f2652245fb52fc548310932fc422c6: Status 404 returned error can't find the container with id e7b27490f38f8c7339dac4ed6c49d113d5f2652245fb52fc548310932fc422c6 Dec 05 10:36:00 crc kubenswrapper[4796]: I1205 10:36:00.452117 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-kgfhs" event={"ID":"dd1a1054-f7c6-4515-9c64-074ce87c169f","Type":"ContainerStarted","Data":"e7b27490f38f8c7339dac4ed6c49d113d5f2652245fb52fc548310932fc422c6"} Dec 05 10:36:00 crc kubenswrapper[4796]: I1205 10:36:00.453352 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-qvqsf" event={"ID":"55906207-dd26-4827-a801-1808e140a903","Type":"ContainerStarted","Data":"92118872652cf3a5be82753d7e85473f60f5e062fcadcb40fcf456747b35b9e8"} Dec 05 10:36:00 crc kubenswrapper[4796]: I1205 10:36:00.454136 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-znnp5" event={"ID":"4fbc7cf7-cc54-4a42-af7c-7c7451d7bfc0","Type":"ContainerStarted","Data":"766aa05a9b1fb1869aff211093c0aad8d99917e2bc82027e92b12d63df0d324b"} Dec 05 10:36:03 crc kubenswrapper[4796]: I1205 10:36:03.475259 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-znnp5" 
event={"ID":"4fbc7cf7-cc54-4a42-af7c-7c7451d7bfc0","Type":"ContainerStarted","Data":"6300c26128772f7c36dfe6f55c31ac45af7acf8fdb716fe5a2b044aecb2b3db9"} Dec 05 10:36:03 crc kubenswrapper[4796]: I1205 10:36:03.477407 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-kgfhs" event={"ID":"dd1a1054-f7c6-4515-9c64-074ce87c169f","Type":"ContainerStarted","Data":"9b94b896c645b480146a6493cd529ae4b2379f2626c68ec1d08088acb48ee872"} Dec 05 10:36:03 crc kubenswrapper[4796]: I1205 10:36:03.477658 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-kgfhs" Dec 05 10:36:03 crc kubenswrapper[4796]: I1205 10:36:03.479534 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-qvqsf" event={"ID":"55906207-dd26-4827-a801-1808e140a903","Type":"ContainerStarted","Data":"59db57a95640fbed14d2c9b0b26ada1edbfb049336dda03936a70d5f2dbbbcf2"} Dec 05 10:36:03 crc kubenswrapper[4796]: I1205 10:36:03.487375 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-znnp5" podStartSLOduration=2.533572916 podStartE2EDuration="4.48735993s" podCreationTimestamp="2025-12-05 10:35:59 +0000 UTC" firstStartedPulling="2025-12-05 10:36:00.175071227 +0000 UTC m=+506.463176740" lastFinishedPulling="2025-12-05 10:36:02.12885824 +0000 UTC m=+508.416963754" observedRunningTime="2025-12-05 10:36:03.487210759 +0000 UTC m=+509.775316282" watchObservedRunningTime="2025-12-05 10:36:03.48735993 +0000 UTC m=+509.775465443" Dec 05 10:36:03 crc kubenswrapper[4796]: I1205 10:36:03.498870 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-qvqsf" podStartSLOduration=2.135831407 podStartE2EDuration="4.498850994s" podCreationTimestamp="2025-12-05 10:35:59 +0000 UTC" firstStartedPulling="2025-12-05 10:36:00.140826533 +0000 UTC 
m=+506.428932047" lastFinishedPulling="2025-12-05 10:36:02.50384612 +0000 UTC m=+508.791951634" observedRunningTime="2025-12-05 10:36:03.495351058 +0000 UTC m=+509.783456571" watchObservedRunningTime="2025-12-05 10:36:03.498850994 +0000 UTC m=+509.786956507" Dec 05 10:36:03 crc kubenswrapper[4796]: I1205 10:36:03.512059 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-kgfhs" podStartSLOduration=2.165278303 podStartE2EDuration="4.512041858s" podCreationTimestamp="2025-12-05 10:35:59 +0000 UTC" firstStartedPulling="2025-12-05 10:36:00.184789626 +0000 UTC m=+506.472895139" lastFinishedPulling="2025-12-05 10:36:02.531553181 +0000 UTC m=+508.819658694" observedRunningTime="2025-12-05 10:36:03.50950862 +0000 UTC m=+509.797614134" watchObservedRunningTime="2025-12-05 10:36:03.512041858 +0000 UTC m=+509.800147371" Dec 05 10:36:05 crc kubenswrapper[4796]: I1205 10:36:05.177304 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:36:05 crc kubenswrapper[4796]: I1205 10:36:05.177360 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:36:09 crc kubenswrapper[4796]: I1205 10:36:09.789956 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-kgfhs" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.047124 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-node-xvb5x"] Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.047869 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovn-controller" containerID="cri-o://3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae" gracePeriod=30 Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.047999 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555" gracePeriod=30 Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.048034 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="kube-rbac-proxy-node" containerID="cri-o://5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589" gracePeriod=30 Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.047989 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="sbdb" containerID="cri-o://96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85" gracePeriod=30 Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.048106 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="northd" containerID="cri-o://2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d" gracePeriod=30 Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.048125 4796 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovn-acl-logging" containerID="cri-o://94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f" gracePeriod=30 Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.047909 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="nbdb" containerID="cri-o://159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8" gracePeriod=30 Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.074362 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovnkube-controller" containerID="cri-o://947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a" gracePeriod=30 Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.310572 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xvb5x_d158ce1c-6415-4e69-a1fe-862330b25ff3/ovnkube-controller/3.log" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.312726 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xvb5x_d158ce1c-6415-4e69-a1fe-862330b25ff3/ovn-acl-logging/0.log" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.313140 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xvb5x_d158ce1c-6415-4e69-a1fe-862330b25ff3/ovn-controller/0.log" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.313507 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.346497 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qhkk8"] Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.346823 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovn-controller" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.346891 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovn-controller" Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.346945 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovnkube-controller" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.346987 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovnkube-controller" Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.347027 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="kube-rbac-proxy-ovn-metrics" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.347073 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="kube-rbac-proxy-ovn-metrics" Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.347115 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovnkube-controller" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.347158 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovnkube-controller" Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.347209 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="northd" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.347252 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="northd" Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.347297 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="kube-rbac-proxy-node" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.347341 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="kube-rbac-proxy-node" Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.347389 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="kubecfg-setup" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.347432 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="kubecfg-setup" Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.347476 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="sbdb" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.347514 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="sbdb" Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.347559 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovnkube-controller" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.347598 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovnkube-controller" Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.347646 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovn-acl-logging" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.347722 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovn-acl-logging" Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.347781 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="nbdb" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.347822 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="nbdb" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.347941 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovn-acl-logging" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.347992 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovnkube-controller" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.348038 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovn-controller" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.348084 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="nbdb" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.348130 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="northd" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.348175 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovnkube-controller" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.348221 4796 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="kube-rbac-proxy-ovn-metrics" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.348264 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovnkube-controller" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.348310 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="kube-rbac-proxy-node" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.348353 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="sbdb" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.348396 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovnkube-controller" Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.348509 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovnkube-controller" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.348558 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovnkube-controller" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.348675 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovnkube-controller" Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.348835 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovnkube-controller" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.348893 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerName="ovnkube-controller" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.350119 4796 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479259 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d158ce1c-6415-4e69-a1fe-862330b25ff3-env-overrides\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479284 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-cni-bin\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479321 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-cni-netd\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479352 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d158ce1c-6415-4e69-a1fe-862330b25ff3-ovnkube-script-lib\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479368 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-run-systemd\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479392 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d158ce1c-6415-4e69-a1fe-862330b25ff3-ovnkube-config\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479406 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479427 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-slash\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479429 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479451 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22cc2\" (UniqueName: \"kubernetes.io/projected/d158ce1c-6415-4e69-a1fe-862330b25ff3-kube-api-access-22cc2\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479468 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-kubelet\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479478 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479480 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-run-ovn\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479510 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479534 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-etc-openvswitch\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479537 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-slash" (OuterVolumeSpecName: "host-slash") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479561 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d158ce1c-6415-4e69-a1fe-862330b25ff3-ovn-node-metrics-cert\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479576 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479577 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479583 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-log-socket\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479635 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-run-openvswitch\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479652 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-node-log\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479667 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-var-lib-openvswitch\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479701 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-run-netns\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479728 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-run-ovn-kubernetes\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479600 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-log-socket" (OuterVolumeSpecName: "log-socket") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479739 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479612 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479765 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479743 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-systemd-units\") pod \"d158ce1c-6415-4e69-a1fe-862330b25ff3\" (UID: \"d158ce1c-6415-4e69-a1fe-862330b25ff3\") " Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479730 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-node-log" (OuterVolumeSpecName: "node-log") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479762 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479777 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479804 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479903 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-kubelet\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479931 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-cni-netd\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479955 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/463196d2-d4bb-46d5-9ad3-3a8170a54d26-env-overrides\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479967 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d158ce1c-6415-4e69-a1fe-862330b25ff3-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479975 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-node-log\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.479980 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d158ce1c-6415-4e69-a1fe-862330b25ff3-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480024 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-run-openvswitch\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480045 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr9wc\" (UniqueName: \"kubernetes.io/projected/463196d2-d4bb-46d5-9ad3-3a8170a54d26-kube-api-access-mr9wc\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480111 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-slash\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480149 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-etc-openvswitch\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480184 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-run-netns\") pod \"ovnkube-node-qhkk8\" (UID: 
\"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480206 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480240 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/463196d2-d4bb-46d5-9ad3-3a8170a54d26-ovn-node-metrics-cert\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480301 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-run-ovn\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480334 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/463196d2-d4bb-46d5-9ad3-3a8170a54d26-ovnkube-script-lib\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480359 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-var-lib-openvswitch\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480383 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-systemd-units\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480399 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-log-socket\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480420 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-cni-bin\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480434 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/463196d2-d4bb-46d5-9ad3-3a8170a54d26-ovnkube-config\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480456 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-run-systemd\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480491 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-run-ovn-kubernetes\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480558 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d158ce1c-6415-4e69-a1fe-862330b25ff3-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480573 4796 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d158ce1c-6415-4e69-a1fe-862330b25ff3-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480582 4796 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480607 4796 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-slash\") on node \"crc\" DevicePath \"\"" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480616 4796 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-kubelet\") on 
node \"crc\" DevicePath \"\"" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480626 4796 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480641 4796 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480649 4796 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-log-socket\") on node \"crc\" DevicePath \"\"" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480657 4796 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480664 4796 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-node-log\") on node \"crc\" DevicePath \"\"" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480672 4796 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480679 4796 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480700 4796 reconciler_common.go:293] "Volume detached 
for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480721 4796 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480728 4796 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.480737 4796 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.481303 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d158ce1c-6415-4e69-a1fe-862330b25ff3-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.484761 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d158ce1c-6415-4e69-a1fe-862330b25ff3-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.485021 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d158ce1c-6415-4e69-a1fe-862330b25ff3-kube-api-access-22cc2" (OuterVolumeSpecName: "kube-api-access-22cc2") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "kube-api-access-22cc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.490519 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d158ce1c-6415-4e69-a1fe-862330b25ff3" (UID: "d158ce1c-6415-4e69-a1fe-862330b25ff3"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.518260 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cqj7h_7d541e60-9b92-4b9d-be51-5bd87e76deac/kube-multus/1.log" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.519276 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cqj7h_7d541e60-9b92-4b9d-be51-5bd87e76deac/kube-multus/0.log" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.519317 4796 generic.go:334] "Generic (PLEG): container finished" podID="7d541e60-9b92-4b9d-be51-5bd87e76deac" containerID="a560e3361c3254d165aaa6d35ccfa35a96aa2453c5a9a8168abd8e442280a65f" exitCode=2 Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.519391 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cqj7h" event={"ID":"7d541e60-9b92-4b9d-be51-5bd87e76deac","Type":"ContainerDied","Data":"a560e3361c3254d165aaa6d35ccfa35a96aa2453c5a9a8168abd8e442280a65f"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.519421 
4796 scope.go:117] "RemoveContainer" containerID="4c629716cd72056f13cbe69e3fdac2a85a3a5160594ddd7fedafe43cd5ce5d01" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.519890 4796 scope.go:117] "RemoveContainer" containerID="a560e3361c3254d165aaa6d35ccfa35a96aa2453c5a9a8168abd8e442280a65f" Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.520101 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-cqj7h_openshift-multus(7d541e60-9b92-4b9d-be51-5bd87e76deac)\"" pod="openshift-multus/multus-cqj7h" podUID="7d541e60-9b92-4b9d-be51-5bd87e76deac" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.523288 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xvb5x_d158ce1c-6415-4e69-a1fe-862330b25ff3/ovnkube-controller/3.log" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.525266 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xvb5x_d158ce1c-6415-4e69-a1fe-862330b25ff3/ovn-acl-logging/0.log" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.525611 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xvb5x_d158ce1c-6415-4e69-a1fe-862330b25ff3/ovn-controller/0.log" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.525932 4796 generic.go:334] "Generic (PLEG): container finished" podID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerID="947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a" exitCode=0 Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.526040 4796 generic.go:334] "Generic (PLEG): container finished" podID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerID="96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85" exitCode=0 Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.526271 4796 generic.go:334] 
"Generic (PLEG): container finished" podID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerID="159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8" exitCode=0 Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.526367 4796 generic.go:334] "Generic (PLEG): container finished" podID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerID="2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d" exitCode=0 Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.526453 4796 generic.go:334] "Generic (PLEG): container finished" podID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerID="d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555" exitCode=0 Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.526517 4796 generic.go:334] "Generic (PLEG): container finished" podID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerID="5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589" exitCode=0 Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.526605 4796 generic.go:334] "Generic (PLEG): container finished" podID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerID="94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f" exitCode=143 Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.526727 4796 generic.go:334] "Generic (PLEG): container finished" podID="d158ce1c-6415-4e69-a1fe-862330b25ff3" containerID="3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae" exitCode=143 Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.526015 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.525985 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerDied","Data":"947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.527050 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerDied","Data":"96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.527131 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerDied","Data":"159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.527200 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerDied","Data":"2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.527283 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerDied","Data":"d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.527356 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerDied","Data":"5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589"} Dec 05 10:36:11 crc 
kubenswrapper[4796]: I1205 10:36:11.527422 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.527488 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.527544 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.527587 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.527647 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.527742 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.527795 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.527862 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f"} Dec 05 10:36:11 crc 
kubenswrapper[4796]: I1205 10:36:11.527911 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.527976 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.528026 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerDied","Data":"94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.528100 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.528151 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.528216 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.528262 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.528324 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.528371 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.528435 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.528477 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.528533 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.528579 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.528642 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerDied","Data":"3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.528733 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.528784 4796 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.528850 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.528897 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.528954 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.528999 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.529045 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.529105 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.529148 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.529186 4796 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.529231 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xvb5x" event={"ID":"d158ce1c-6415-4e69-a1fe-862330b25ff3","Type":"ContainerDied","Data":"da355471901c9263fc643649db8eb7c43f691d3b41c33622a6ef20cb4bbe86de"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.529277 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.529343 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.529385 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.529429 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.529468 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.529508 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555"} Dec 05 
10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.529554 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.529597 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.529636 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.529672 4796 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe"} Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.539014 4796 scope.go:117] "RemoveContainer" containerID="947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.553931 4796 scope.go:117] "RemoveContainer" containerID="428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.556334 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xvb5x"] Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.562813 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xvb5x"] Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581310 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/463196d2-d4bb-46d5-9ad3-3a8170a54d26-ovnkube-script-lib\") pod \"ovnkube-node-qhkk8\" (UID: 
\"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581362 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-var-lib-openvswitch\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581383 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-systemd-units\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581403 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-log-socket\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581431 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-cni-bin\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581445 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/463196d2-d4bb-46d5-9ad3-3a8170a54d26-ovnkube-config\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581446 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-var-lib-openvswitch\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581469 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-run-systemd\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581486 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-run-ovn-kubernetes\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581494 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-log-socket\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581519 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-systemd-units\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581554 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-kubelet\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581571 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-cni-netd\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581595 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/463196d2-d4bb-46d5-9ad3-3a8170a54d26-env-overrides\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581612 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-node-log\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581645 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr9wc\" (UniqueName: \"kubernetes.io/projected/463196d2-d4bb-46d5-9ad3-3a8170a54d26-kube-api-access-mr9wc\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581665 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-run-openvswitch\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581705 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-slash\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581735 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-etc-openvswitch\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581772 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-run-netns\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581776 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-cni-bin\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581790 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581811 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/463196d2-d4bb-46d5-9ad3-3a8170a54d26-ovn-node-metrics-cert\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581826 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-run-ovn\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581869 4796 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d158ce1c-6415-4e69-a1fe-862330b25ff3-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581881 4796 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d158ce1c-6415-4e69-a1fe-862330b25ff3-env-overrides\") on node \"crc\" DevicePath \"\""
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581890 4796 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d158ce1c-6415-4e69-a1fe-862330b25ff3-run-systemd\") on node \"crc\" DevicePath \"\""
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581899 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22cc2\" (UniqueName: \"kubernetes.io/projected/d158ce1c-6415-4e69-a1fe-862330b25ff3-kube-api-access-22cc2\") on node \"crc\" DevicePath \"\""
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581920 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-run-ovn\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581940 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-kubelet\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581959 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-cni-netd\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.581998 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/463196d2-d4bb-46d5-9ad3-3a8170a54d26-ovnkube-script-lib\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.582044 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-slash\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.582072 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-node-log\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.582236 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/463196d2-d4bb-46d5-9ad3-3a8170a54d26-ovnkube-config\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.582273 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-run-systemd\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.582272 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/463196d2-d4bb-46d5-9ad3-3a8170a54d26-env-overrides\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.582292 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-etc-openvswitch\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.582304 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-run-ovn-kubernetes\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.582324 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-run-openvswitch\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.582342 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.582363 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/463196d2-d4bb-46d5-9ad3-3a8170a54d26-host-run-netns\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.582429 4796 scope.go:117] "RemoveContainer" containerID="96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.584759 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/463196d2-d4bb-46d5-9ad3-3a8170a54d26-ovn-node-metrics-cert\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.593290 4796 scope.go:117] "RemoveContainer" containerID="159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.595404 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr9wc\" (UniqueName: \"kubernetes.io/projected/463196d2-d4bb-46d5-9ad3-3a8170a54d26-kube-api-access-mr9wc\") pod \"ovnkube-node-qhkk8\" (UID: \"463196d2-d4bb-46d5-9ad3-3a8170a54d26\") " pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.606491 4796 scope.go:117] "RemoveContainer" containerID="2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.617703 4796 scope.go:117] "RemoveContainer" containerID="d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.628738 4796 scope.go:117] "RemoveContainer" containerID="5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.638848 4796 scope.go:117] "RemoveContainer" containerID="94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.647401 4796 scope.go:117] "RemoveContainer" containerID="3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.656664 4796 scope.go:117] "RemoveContainer" containerID="b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.660834 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.664901 4796 scope.go:117] "RemoveContainer" containerID="947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a"
Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.665176 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a\": container with ID starting with 947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a not found: ID does not exist" containerID="947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.665208 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a"} err="failed to get container status \"947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a\": rpc error: code = NotFound desc = could not find container \"947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a\": container with ID starting with 947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.665231 4796 scope.go:117] "RemoveContainer" containerID="428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb"
Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.665586 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\": container with ID starting with 428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb not found: ID does not exist" containerID="428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.665659 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb"} err="failed to get container status \"428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\": rpc error: code = NotFound desc = could not find container \"428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\": container with ID starting with 428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.665761 4796 scope.go:117] "RemoveContainer" containerID="96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85"
Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.666152 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\": container with ID starting with 96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85 not found: ID does not exist" containerID="96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.666187 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85"} err="failed to get container status \"96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\": rpc error: code = NotFound desc = could not find container \"96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\": container with ID starting with 96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85 not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.666209 4796 scope.go:117] "RemoveContainer" containerID="159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8"
Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.666428 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\": container with ID starting with 159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8 not found: ID does not exist" containerID="159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.666451 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8"} err="failed to get container status \"159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\": rpc error: code = NotFound desc = could not find container \"159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\": container with ID starting with 159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8 not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.666467 4796 scope.go:117] "RemoveContainer" containerID="2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d"
Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.666659 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\": container with ID starting with 2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d not found: ID does not exist" containerID="2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.666679 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d"} err="failed to get container status \"2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\": rpc error: code = NotFound desc = could not find container \"2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\": container with ID starting with 2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.666705 4796 scope.go:117] "RemoveContainer" containerID="d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555"
Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.666938 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\": container with ID starting with d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555 not found: ID does not exist" containerID="d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.666980 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555"} err="failed to get container status \"d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\": rpc error: code = NotFound desc = could not find container \"d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\": container with ID starting with d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555 not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.667006 4796 scope.go:117] "RemoveContainer" containerID="5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589"
Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.667246 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\": container with ID starting with 5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589 not found: ID does not exist" containerID="5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.667275 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589"} err="failed to get container status \"5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\": rpc error: code = NotFound desc = could not find container \"5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\": container with ID starting with 5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589 not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.667292 4796 scope.go:117] "RemoveContainer" containerID="94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f"
Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.667521 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\": container with ID starting with 94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f not found: ID does not exist" containerID="94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.667556 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f"} err="failed to get container status \"94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\": rpc error: code = NotFound desc = could not find container \"94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\": container with ID starting with 94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.667577 4796 scope.go:117] "RemoveContainer" containerID="3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae"
Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.667873 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\": container with ID starting with 3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae not found: ID does not exist" containerID="3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.667912 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae"} err="failed to get container status \"3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\": rpc error: code = NotFound desc = could not find container \"3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\": container with ID starting with 3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.667930 4796 scope.go:117] "RemoveContainer" containerID="b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe"
Dec 05 10:36:11 crc kubenswrapper[4796]: E1205 10:36:11.668153 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\": container with ID starting with b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe not found: ID does not exist" containerID="b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.668178 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe"} err="failed to get container status \"b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\": rpc error: code = NotFound desc = could not find container \"b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\": container with ID starting with b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.668194 4796 scope.go:117] "RemoveContainer" containerID="947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.668411 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a"} err="failed to get container status \"947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a\": rpc error: code = NotFound desc = could not find container \"947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a\": container with ID starting with 947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.668440 4796 scope.go:117] "RemoveContainer" containerID="428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.668779 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb"} err="failed to get container status \"428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\": rpc error: code = NotFound desc = could not find container \"428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\": container with ID starting with 428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.668815 4796 scope.go:117] "RemoveContainer" containerID="96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.669047 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85"} err="failed to get container status \"96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\": rpc error: code = NotFound desc = could not find container \"96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\": container with ID starting with 96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85 not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.669070 4796 scope.go:117] "RemoveContainer" containerID="159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.669250 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8"} err="failed to get container status \"159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\": rpc error: code = NotFound desc = could not find container \"159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\": container with ID starting with 159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8 not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.669271 4796 scope.go:117] "RemoveContainer" containerID="2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.669462 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d"} err="failed to get container status \"2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\": rpc error: code = NotFound desc = could not find container \"2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\": container with ID starting with 2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.669480 4796 scope.go:117] "RemoveContainer" containerID="d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.669664 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555"} err="failed to get container status \"d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\": rpc error: code = NotFound desc = could not find container \"d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\": container with ID starting with d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555 not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.669697 4796 scope.go:117] "RemoveContainer" containerID="5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.669940 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589"} err="failed to get container status \"5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\": rpc error: code = NotFound desc = could not find container \"5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\": container with ID starting with 5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589 not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.669968 4796 scope.go:117] "RemoveContainer" containerID="94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.670220 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f"} err="failed to get container status \"94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\": rpc error: code = NotFound desc = could not find container \"94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\": container with ID starting with 94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.670245 4796 scope.go:117] "RemoveContainer" containerID="3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.671445 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae"} err="failed to get container status \"3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\": rpc error: code = NotFound desc = could not find container \"3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\": container with ID starting with 3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.671491 4796 scope.go:117] "RemoveContainer" containerID="b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.671763 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe"} err="failed to get container status \"b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\": rpc error: code = NotFound desc = could not find container \"b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\": container with ID starting with b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.671784 4796 scope.go:117] "RemoveContainer" containerID="947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.672038 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a"} err="failed to get container status \"947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a\": rpc error: code = NotFound desc = could not find container \"947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a\": container with ID starting with 947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a not found: ID does not exist"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.672063 4796 scope.go:117] "RemoveContainer" containerID="428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb"
Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.672332 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb"} err="failed to get container status \"428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\": rpc error: code = NotFound desc = could not find container \"428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\": container with ID starting with 428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb not found: ID does not
exist" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.672354 4796 scope.go:117] "RemoveContainer" containerID="96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.672590 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85"} err="failed to get container status \"96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\": rpc error: code = NotFound desc = could not find container \"96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\": container with ID starting with 96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85 not found: ID does not exist" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.672607 4796 scope.go:117] "RemoveContainer" containerID="159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.672838 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8"} err="failed to get container status \"159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\": rpc error: code = NotFound desc = could not find container \"159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\": container with ID starting with 159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8 not found: ID does not exist" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.672863 4796 scope.go:117] "RemoveContainer" containerID="2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.673076 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d"} err="failed to get container status 
\"2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\": rpc error: code = NotFound desc = could not find container \"2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\": container with ID starting with 2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d not found: ID does not exist" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.673094 4796 scope.go:117] "RemoveContainer" containerID="d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.673295 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555"} err="failed to get container status \"d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\": rpc error: code = NotFound desc = could not find container \"d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\": container with ID starting with d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555 not found: ID does not exist" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.673319 4796 scope.go:117] "RemoveContainer" containerID="5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.673544 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589"} err="failed to get container status \"5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\": rpc error: code = NotFound desc = could not find container \"5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\": container with ID starting with 5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589 not found: ID does not exist" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.673564 4796 scope.go:117] "RemoveContainer" 
containerID="94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.673797 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f"} err="failed to get container status \"94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\": rpc error: code = NotFound desc = could not find container \"94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\": container with ID starting with 94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f not found: ID does not exist" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.673828 4796 scope.go:117] "RemoveContainer" containerID="3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.674016 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae"} err="failed to get container status \"3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\": rpc error: code = NotFound desc = could not find container \"3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\": container with ID starting with 3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae not found: ID does not exist" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.674040 4796 scope.go:117] "RemoveContainer" containerID="b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.674299 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe"} err="failed to get container status \"b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\": rpc error: code = NotFound desc = could 
not find container \"b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\": container with ID starting with b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe not found: ID does not exist" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.674327 4796 scope.go:117] "RemoveContainer" containerID="947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.674588 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a"} err="failed to get container status \"947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a\": rpc error: code = NotFound desc = could not find container \"947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a\": container with ID starting with 947dc84104b89d05f573200ebf6d9313682a96a3633f08bf61422cc5a85f172a not found: ID does not exist" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.674610 4796 scope.go:117] "RemoveContainer" containerID="428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.674857 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb"} err="failed to get container status \"428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\": rpc error: code = NotFound desc = could not find container \"428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb\": container with ID starting with 428bf32bdde7e5440be43acd3cb1220255e4629d689eda7959f2b2b35afe63cb not found: ID does not exist" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.674886 4796 scope.go:117] "RemoveContainer" containerID="96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 
10:36:11.675092 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85"} err="failed to get container status \"96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\": rpc error: code = NotFound desc = could not find container \"96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85\": container with ID starting with 96696bd003b3b57ca276ca6570d22618f9d6f62785ad91f7138dbe5a7addde85 not found: ID does not exist" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.675122 4796 scope.go:117] "RemoveContainer" containerID="159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.675398 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8"} err="failed to get container status \"159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\": rpc error: code = NotFound desc = could not find container \"159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8\": container with ID starting with 159b661f315659e3e19d782bb4d69df7ddb72500b036d0f0b03f49c99f59baa8 not found: ID does not exist" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.675427 4796 scope.go:117] "RemoveContainer" containerID="2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.675624 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d"} err="failed to get container status \"2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\": rpc error: code = NotFound desc = could not find container \"2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d\": container with ID starting with 
2ed86778cf4063409feb4da889c943efd86dc41e2e0758e57da4e39a4ce3983d not found: ID does not exist" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.675641 4796 scope.go:117] "RemoveContainer" containerID="d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.675834 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555"} err="failed to get container status \"d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\": rpc error: code = NotFound desc = could not find container \"d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555\": container with ID starting with d4f0311d4167f0922f3abc46ea6c70da9b67ab9dcc9ca5d69a198030121ce555 not found: ID does not exist" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.675860 4796 scope.go:117] "RemoveContainer" containerID="5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.676068 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589"} err="failed to get container status \"5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\": rpc error: code = NotFound desc = could not find container \"5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589\": container with ID starting with 5d68db2fd74f57040b0f0915cff75209c21b266a491cd6694029fde6dda37589 not found: ID does not exist" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.676089 4796 scope.go:117] "RemoveContainer" containerID="94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.676266 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f"} err="failed to get container status \"94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\": rpc error: code = NotFound desc = could not find container \"94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f\": container with ID starting with 94e88b6483a6121dd81bc9fad3a7fe94d65290682bc5ab6c96695ad75d8f222f not found: ID does not exist" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.676284 4796 scope.go:117] "RemoveContainer" containerID="3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.676432 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae"} err="failed to get container status \"3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\": rpc error: code = NotFound desc = could not find container \"3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae\": container with ID starting with 3a4be9cefdc9b2bedf85b4ca0489d529367a77d6d79df067d55b9d942a0be6ae not found: ID does not exist" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.676448 4796 scope.go:117] "RemoveContainer" containerID="b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe" Dec 05 10:36:11 crc kubenswrapper[4796]: I1205 10:36:11.676875 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe"} err="failed to get container status \"b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\": rpc error: code = NotFound desc = could not find container \"b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe\": container with ID starting with b4d85cd22f17aa7e3a270e2d49e3698d4dffd4155af63fcbc9411d66c5aa01fe not found: ID does not 
exist" Dec 05 10:36:12 crc kubenswrapper[4796]: I1205 10:36:12.035326 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d158ce1c-6415-4e69-a1fe-862330b25ff3" path="/var/lib/kubelet/pods/d158ce1c-6415-4e69-a1fe-862330b25ff3/volumes" Dec 05 10:36:12 crc kubenswrapper[4796]: I1205 10:36:12.532059 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-cqj7h_7d541e60-9b92-4b9d-be51-5bd87e76deac/kube-multus/1.log" Dec 05 10:36:12 crc kubenswrapper[4796]: I1205 10:36:12.533337 4796 generic.go:334] "Generic (PLEG): container finished" podID="463196d2-d4bb-46d5-9ad3-3a8170a54d26" containerID="bd9186668c16d1c407dc5ff5ac4cbc4b62844ae7784f5dd0a87f18dcc25b161d" exitCode=0 Dec 05 10:36:12 crc kubenswrapper[4796]: I1205 10:36:12.533395 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" event={"ID":"463196d2-d4bb-46d5-9ad3-3a8170a54d26","Type":"ContainerDied","Data":"bd9186668c16d1c407dc5ff5ac4cbc4b62844ae7784f5dd0a87f18dcc25b161d"} Dec 05 10:36:12 crc kubenswrapper[4796]: I1205 10:36:12.533420 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" event={"ID":"463196d2-d4bb-46d5-9ad3-3a8170a54d26","Type":"ContainerStarted","Data":"dff64f2b5887afb60797970ea5ed6431b61918f56517319762de527086f6335b"} Dec 05 10:36:13 crc kubenswrapper[4796]: I1205 10:36:13.540778 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" event={"ID":"463196d2-d4bb-46d5-9ad3-3a8170a54d26","Type":"ContainerStarted","Data":"0b6fdf544753516c843edfc5dbb1b9d62290093668fffc5578d50d9d647616fc"} Dec 05 10:36:13 crc kubenswrapper[4796]: I1205 10:36:13.541219 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" 
event={"ID":"463196d2-d4bb-46d5-9ad3-3a8170a54d26","Type":"ContainerStarted","Data":"4f5222d404cf3c2101fe1fe4fa63e3ca0b4ff1fb6db605228d06da68e969965d"} Dec 05 10:36:13 crc kubenswrapper[4796]: I1205 10:36:13.541237 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" event={"ID":"463196d2-d4bb-46d5-9ad3-3a8170a54d26","Type":"ContainerStarted","Data":"1b1b9db1663a1d12e42fc472d61131320b7f23cb226ce8e93f1e39eb60a23984"} Dec 05 10:36:13 crc kubenswrapper[4796]: I1205 10:36:13.541251 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" event={"ID":"463196d2-d4bb-46d5-9ad3-3a8170a54d26","Type":"ContainerStarted","Data":"c36f8dfadd79add888d3773eeb16ae7c1d9c7cbc17df910fd495fa7d1c4fb422"} Dec 05 10:36:13 crc kubenswrapper[4796]: I1205 10:36:13.541260 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" event={"ID":"463196d2-d4bb-46d5-9ad3-3a8170a54d26","Type":"ContainerStarted","Data":"e2d58fab32d37656adaf8f4c4b44953ca6202bd43ae27c70bd17ab0638ff6e3c"} Dec 05 10:36:13 crc kubenswrapper[4796]: I1205 10:36:13.541271 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" event={"ID":"463196d2-d4bb-46d5-9ad3-3a8170a54d26","Type":"ContainerStarted","Data":"d03a63db5edd0e328ecb4510e1afafa52396a3179e0c196895a097feb49bf7be"} Dec 05 10:36:15 crc kubenswrapper[4796]: I1205 10:36:15.554254 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" event={"ID":"463196d2-d4bb-46d5-9ad3-3a8170a54d26","Type":"ContainerStarted","Data":"14cfd8275799cfd7d04b654b899e8d4cc4466bf09644918f8bfe4e488db07883"} Dec 05 10:36:17 crc kubenswrapper[4796]: I1205 10:36:17.564487 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" 
event={"ID":"463196d2-d4bb-46d5-9ad3-3a8170a54d26","Type":"ContainerStarted","Data":"da732d8748857e8a7f098dfc5a58794573fb745b708c8ca00b73a8877ea2f293"} Dec 05 10:36:17 crc kubenswrapper[4796]: I1205 10:36:17.564923 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:17 crc kubenswrapper[4796]: I1205 10:36:17.564935 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:17 crc kubenswrapper[4796]: I1205 10:36:17.583951 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:17 crc kubenswrapper[4796]: I1205 10:36:17.592221 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" podStartSLOduration=6.59220744 podStartE2EDuration="6.59220744s" podCreationTimestamp="2025-12-05 10:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:36:17.590734608 +0000 UTC m=+523.878840121" watchObservedRunningTime="2025-12-05 10:36:17.59220744 +0000 UTC m=+523.880312953" Dec 05 10:36:18 crc kubenswrapper[4796]: I1205 10:36:18.570782 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:18 crc kubenswrapper[4796]: I1205 10:36:18.592611 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:26 crc kubenswrapper[4796]: I1205 10:36:26.030987 4796 scope.go:117] "RemoveContainer" containerID="a560e3361c3254d165aaa6d35ccfa35a96aa2453c5a9a8168abd8e442280a65f" Dec 05 10:36:26 crc kubenswrapper[4796]: I1205 10:36:26.617674 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-cqj7h_7d541e60-9b92-4b9d-be51-5bd87e76deac/kube-multus/1.log" Dec 05 10:36:26 crc kubenswrapper[4796]: I1205 10:36:26.617971 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-cqj7h" event={"ID":"7d541e60-9b92-4b9d-be51-5bd87e76deac","Type":"ContainerStarted","Data":"20cf1428d7f0cf1b87729103ee29453a4b933b10f23b94c18c2a1a8b60034f3d"} Dec 05 10:36:35 crc kubenswrapper[4796]: I1205 10:36:35.177743 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:36:35 crc kubenswrapper[4796]: I1205 10:36:35.178160 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:36:38 crc kubenswrapper[4796]: I1205 10:36:38.946042 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd"] Dec 05 10:36:38 crc kubenswrapper[4796]: I1205 10:36:38.947064 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd" Dec 05 10:36:38 crc kubenswrapper[4796]: I1205 10:36:38.948785 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 05 10:36:38 crc kubenswrapper[4796]: I1205 10:36:38.952750 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd"] Dec 05 10:36:38 crc kubenswrapper[4796]: I1205 10:36:38.956626 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f393972a-bae2-4076-bc21-2f9f67d12875-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd\" (UID: \"f393972a-bae2-4076-bc21-2f9f67d12875\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd" Dec 05 10:36:39 crc kubenswrapper[4796]: I1205 10:36:39.058299 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f393972a-bae2-4076-bc21-2f9f67d12875-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd\" (UID: \"f393972a-bae2-4076-bc21-2f9f67d12875\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd" Dec 05 10:36:39 crc kubenswrapper[4796]: I1205 10:36:39.058581 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f393972a-bae2-4076-bc21-2f9f67d12875-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd\" (UID: \"f393972a-bae2-4076-bc21-2f9f67d12875\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd" Dec 05 10:36:39 crc kubenswrapper[4796]: I1205 10:36:39.058628 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lctpg\" (UniqueName: \"kubernetes.io/projected/f393972a-bae2-4076-bc21-2f9f67d12875-kube-api-access-lctpg\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd\" (UID: \"f393972a-bae2-4076-bc21-2f9f67d12875\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd" Dec 05 10:36:39 crc kubenswrapper[4796]: I1205 10:36:39.059034 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f393972a-bae2-4076-bc21-2f9f67d12875-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd\" (UID: \"f393972a-bae2-4076-bc21-2f9f67d12875\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd" Dec 05 10:36:39 crc kubenswrapper[4796]: I1205 10:36:39.159565 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f393972a-bae2-4076-bc21-2f9f67d12875-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd\" (UID: \"f393972a-bae2-4076-bc21-2f9f67d12875\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd" Dec 05 10:36:39 crc kubenswrapper[4796]: I1205 10:36:39.159644 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lctpg\" (UniqueName: \"kubernetes.io/projected/f393972a-bae2-4076-bc21-2f9f67d12875-kube-api-access-lctpg\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd\" (UID: \"f393972a-bae2-4076-bc21-2f9f67d12875\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd" Dec 05 10:36:39 crc kubenswrapper[4796]: I1205 10:36:39.159971 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/f393972a-bae2-4076-bc21-2f9f67d12875-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd\" (UID: \"f393972a-bae2-4076-bc21-2f9f67d12875\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd" Dec 05 10:36:39 crc kubenswrapper[4796]: I1205 10:36:39.173446 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lctpg\" (UniqueName: \"kubernetes.io/projected/f393972a-bae2-4076-bc21-2f9f67d12875-kube-api-access-lctpg\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd\" (UID: \"f393972a-bae2-4076-bc21-2f9f67d12875\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd" Dec 05 10:36:39 crc kubenswrapper[4796]: I1205 10:36:39.265156 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd" Dec 05 10:36:39 crc kubenswrapper[4796]: I1205 10:36:39.589784 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd"] Dec 05 10:36:39 crc kubenswrapper[4796]: W1205 10:36:39.593666 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf393972a_bae2_4076_bc21_2f9f67d12875.slice/crio-cc30b98f5162fa466130332f38ab653b114150e4fb4ae44f325b90516a205f60 WatchSource:0}: Error finding container cc30b98f5162fa466130332f38ab653b114150e4fb4ae44f325b90516a205f60: Status 404 returned error can't find the container with id cc30b98f5162fa466130332f38ab653b114150e4fb4ae44f325b90516a205f60 Dec 05 10:36:39 crc kubenswrapper[4796]: I1205 10:36:39.678358 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd" 
event={"ID":"f393972a-bae2-4076-bc21-2f9f67d12875","Type":"ContainerStarted","Data":"ef3fcf03b09b3e50a25b46f26562dff72f1aab48519e940f7017f51f98cd6a51"} Dec 05 10:36:39 crc kubenswrapper[4796]: I1205 10:36:39.678393 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd" event={"ID":"f393972a-bae2-4076-bc21-2f9f67d12875","Type":"ContainerStarted","Data":"cc30b98f5162fa466130332f38ab653b114150e4fb4ae44f325b90516a205f60"} Dec 05 10:36:40 crc kubenswrapper[4796]: I1205 10:36:40.682802 4796 generic.go:334] "Generic (PLEG): container finished" podID="f393972a-bae2-4076-bc21-2f9f67d12875" containerID="ef3fcf03b09b3e50a25b46f26562dff72f1aab48519e940f7017f51f98cd6a51" exitCode=0 Dec 05 10:36:40 crc kubenswrapper[4796]: I1205 10:36:40.682838 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd" event={"ID":"f393972a-bae2-4076-bc21-2f9f67d12875","Type":"ContainerDied","Data":"ef3fcf03b09b3e50a25b46f26562dff72f1aab48519e940f7017f51f98cd6a51"} Dec 05 10:36:41 crc kubenswrapper[4796]: I1205 10:36:41.677325 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qhkk8" Dec 05 10:36:42 crc kubenswrapper[4796]: I1205 10:36:42.691665 4796 generic.go:334] "Generic (PLEG): container finished" podID="f393972a-bae2-4076-bc21-2f9f67d12875" containerID="fa1cab808a16680118306dc096045e9cfb506fe3dba8669785b452ddc7cc2cda" exitCode=0 Dec 05 10:36:42 crc kubenswrapper[4796]: I1205 10:36:42.691701 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd" event={"ID":"f393972a-bae2-4076-bc21-2f9f67d12875","Type":"ContainerDied","Data":"fa1cab808a16680118306dc096045e9cfb506fe3dba8669785b452ddc7cc2cda"} Dec 05 10:36:43 crc kubenswrapper[4796]: I1205 10:36:43.698185 4796 
generic.go:334] "Generic (PLEG): container finished" podID="f393972a-bae2-4076-bc21-2f9f67d12875" containerID="c11a6305278d25225be73807f5c8ae1d73012518e749419b71c8e43e64b379df" exitCode=0 Dec 05 10:36:43 crc kubenswrapper[4796]: I1205 10:36:43.698227 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd" event={"ID":"f393972a-bae2-4076-bc21-2f9f67d12875","Type":"ContainerDied","Data":"c11a6305278d25225be73807f5c8ae1d73012518e749419b71c8e43e64b379df"} Dec 05 10:36:44 crc kubenswrapper[4796]: I1205 10:36:44.857133 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd" Dec 05 10:36:45 crc kubenswrapper[4796]: I1205 10:36:45.009303 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lctpg\" (UniqueName: \"kubernetes.io/projected/f393972a-bae2-4076-bc21-2f9f67d12875-kube-api-access-lctpg\") pod \"f393972a-bae2-4076-bc21-2f9f67d12875\" (UID: \"f393972a-bae2-4076-bc21-2f9f67d12875\") " Dec 05 10:36:45 crc kubenswrapper[4796]: I1205 10:36:45.009362 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f393972a-bae2-4076-bc21-2f9f67d12875-util\") pod \"f393972a-bae2-4076-bc21-2f9f67d12875\" (UID: \"f393972a-bae2-4076-bc21-2f9f67d12875\") " Dec 05 10:36:45 crc kubenswrapper[4796]: I1205 10:36:45.009403 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f393972a-bae2-4076-bc21-2f9f67d12875-bundle\") pod \"f393972a-bae2-4076-bc21-2f9f67d12875\" (UID: \"f393972a-bae2-4076-bc21-2f9f67d12875\") " Dec 05 10:36:45 crc kubenswrapper[4796]: I1205 10:36:45.009851 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f393972a-bae2-4076-bc21-2f9f67d12875-bundle" (OuterVolumeSpecName: "bundle") pod "f393972a-bae2-4076-bc21-2f9f67d12875" (UID: "f393972a-bae2-4076-bc21-2f9f67d12875"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:36:45 crc kubenswrapper[4796]: I1205 10:36:45.013456 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f393972a-bae2-4076-bc21-2f9f67d12875-kube-api-access-lctpg" (OuterVolumeSpecName: "kube-api-access-lctpg") pod "f393972a-bae2-4076-bc21-2f9f67d12875" (UID: "f393972a-bae2-4076-bc21-2f9f67d12875"). InnerVolumeSpecName "kube-api-access-lctpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:36:45 crc kubenswrapper[4796]: I1205 10:36:45.016674 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f393972a-bae2-4076-bc21-2f9f67d12875-util" (OuterVolumeSpecName: "util") pod "f393972a-bae2-4076-bc21-2f9f67d12875" (UID: "f393972a-bae2-4076-bc21-2f9f67d12875"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:36:45 crc kubenswrapper[4796]: I1205 10:36:45.110236 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lctpg\" (UniqueName: \"kubernetes.io/projected/f393972a-bae2-4076-bc21-2f9f67d12875-kube-api-access-lctpg\") on node \"crc\" DevicePath \"\"" Dec 05 10:36:45 crc kubenswrapper[4796]: I1205 10:36:45.110259 4796 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f393972a-bae2-4076-bc21-2f9f67d12875-util\") on node \"crc\" DevicePath \"\"" Dec 05 10:36:45 crc kubenswrapper[4796]: I1205 10:36:45.110268 4796 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f393972a-bae2-4076-bc21-2f9f67d12875-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:36:45 crc kubenswrapper[4796]: I1205 10:36:45.707045 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd" event={"ID":"f393972a-bae2-4076-bc21-2f9f67d12875","Type":"ContainerDied","Data":"cc30b98f5162fa466130332f38ab653b114150e4fb4ae44f325b90516a205f60"} Dec 05 10:36:45 crc kubenswrapper[4796]: I1205 10:36:45.707079 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc30b98f5162fa466130332f38ab653b114150e4fb4ae44f325b90516a205f60" Dec 05 10:36:45 crc kubenswrapper[4796]: I1205 10:36:45.707079 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd" Dec 05 10:36:46 crc kubenswrapper[4796]: I1205 10:36:46.970154 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-8pj5q"] Dec 05 10:36:46 crc kubenswrapper[4796]: E1205 10:36:46.970344 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f393972a-bae2-4076-bc21-2f9f67d12875" containerName="pull" Dec 05 10:36:46 crc kubenswrapper[4796]: I1205 10:36:46.970356 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f393972a-bae2-4076-bc21-2f9f67d12875" containerName="pull" Dec 05 10:36:46 crc kubenswrapper[4796]: E1205 10:36:46.970371 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f393972a-bae2-4076-bc21-2f9f67d12875" containerName="extract" Dec 05 10:36:46 crc kubenswrapper[4796]: I1205 10:36:46.970377 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f393972a-bae2-4076-bc21-2f9f67d12875" containerName="extract" Dec 05 10:36:46 crc kubenswrapper[4796]: E1205 10:36:46.970386 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f393972a-bae2-4076-bc21-2f9f67d12875" containerName="util" Dec 05 10:36:46 crc kubenswrapper[4796]: I1205 10:36:46.970391 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f393972a-bae2-4076-bc21-2f9f67d12875" containerName="util" Dec 05 10:36:46 crc kubenswrapper[4796]: I1205 10:36:46.970486 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f393972a-bae2-4076-bc21-2f9f67d12875" containerName="extract" Dec 05 10:36:46 crc kubenswrapper[4796]: I1205 10:36:46.970801 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-8pj5q" Dec 05 10:36:46 crc kubenswrapper[4796]: I1205 10:36:46.972317 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 05 10:36:46 crc kubenswrapper[4796]: I1205 10:36:46.972338 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 05 10:36:46 crc kubenswrapper[4796]: I1205 10:36:46.972464 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-pccvd" Dec 05 10:36:46 crc kubenswrapper[4796]: I1205 10:36:46.980716 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-8pj5q"] Dec 05 10:36:47 crc kubenswrapper[4796]: I1205 10:36:47.128915 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpxtj\" (UniqueName: \"kubernetes.io/projected/f9217359-3b50-4237-bff8-75ff7eebf333-kube-api-access-cpxtj\") pod \"nmstate-operator-5b5b58f5c8-8pj5q\" (UID: \"f9217359-3b50-4237-bff8-75ff7eebf333\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-8pj5q" Dec 05 10:36:47 crc kubenswrapper[4796]: I1205 10:36:47.230296 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpxtj\" (UniqueName: \"kubernetes.io/projected/f9217359-3b50-4237-bff8-75ff7eebf333-kube-api-access-cpxtj\") pod \"nmstate-operator-5b5b58f5c8-8pj5q\" (UID: \"f9217359-3b50-4237-bff8-75ff7eebf333\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-8pj5q" Dec 05 10:36:47 crc kubenswrapper[4796]: I1205 10:36:47.244622 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpxtj\" (UniqueName: \"kubernetes.io/projected/f9217359-3b50-4237-bff8-75ff7eebf333-kube-api-access-cpxtj\") pod \"nmstate-operator-5b5b58f5c8-8pj5q\" (UID: 
\"f9217359-3b50-4237-bff8-75ff7eebf333\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-8pj5q" Dec 05 10:36:47 crc kubenswrapper[4796]: I1205 10:36:47.282236 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-8pj5q" Dec 05 10:36:47 crc kubenswrapper[4796]: I1205 10:36:47.408219 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-8pj5q"] Dec 05 10:36:47 crc kubenswrapper[4796]: W1205 10:36:47.412455 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9217359_3b50_4237_bff8_75ff7eebf333.slice/crio-d3b6fafa4e8e06dd0ecffb49127702e46e5efe754bcb5f2c8062f350f14326a6 WatchSource:0}: Error finding container d3b6fafa4e8e06dd0ecffb49127702e46e5efe754bcb5f2c8062f350f14326a6: Status 404 returned error can't find the container with id d3b6fafa4e8e06dd0ecffb49127702e46e5efe754bcb5f2c8062f350f14326a6 Dec 05 10:36:47 crc kubenswrapper[4796]: I1205 10:36:47.714855 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-8pj5q" event={"ID":"f9217359-3b50-4237-bff8-75ff7eebf333","Type":"ContainerStarted","Data":"d3b6fafa4e8e06dd0ecffb49127702e46e5efe754bcb5f2c8062f350f14326a6"} Dec 05 10:36:49 crc kubenswrapper[4796]: I1205 10:36:49.722300 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-8pj5q" event={"ID":"f9217359-3b50-4237-bff8-75ff7eebf333","Type":"ContainerStarted","Data":"ae4ae063d596e19a7fc457d64609e18204334487ffad54fc516e3ae9d2b5fe42"} Dec 05 10:36:49 crc kubenswrapper[4796]: I1205 10:36:49.734474 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-8pj5q" podStartSLOduration=1.7044720180000001 podStartE2EDuration="3.734461449s" podCreationTimestamp="2025-12-05 10:36:46 +0000 UTC" 
firstStartedPulling="2025-12-05 10:36:47.41364246 +0000 UTC m=+553.701747963" lastFinishedPulling="2025-12-05 10:36:49.443631881 +0000 UTC m=+555.731737394" observedRunningTime="2025-12-05 10:36:49.731749074 +0000 UTC m=+556.019854587" watchObservedRunningTime="2025-12-05 10:36:49.734461449 +0000 UTC m=+556.022566962" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.453510 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-wlc9g"] Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.454237 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-wlc9g" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.460615 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-4s9tg" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.470070 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8q8q\" (UniqueName: \"kubernetes.io/projected/6dbc6bec-7e30-4b6e-9db8-b0ea8f210baa-kube-api-access-x8q8q\") pod \"nmstate-metrics-7f946cbc9-wlc9g\" (UID: \"6dbc6bec-7e30-4b6e-9db8-b0ea8f210baa\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-wlc9g" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.471597 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-sflfc"] Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.472221 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-sflfc" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.473487 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.474043 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-wlc9g"] Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.480377 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-nmcxh"] Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.480928 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-nmcxh" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.486147 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-sflfc"] Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.570857 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3d6ead77-e785-46e2-a9d8-1cb1bf83ae85-ovs-socket\") pod \"nmstate-handler-nmcxh\" (UID: \"3d6ead77-e785-46e2-a9d8-1cb1bf83ae85\") " pod="openshift-nmstate/nmstate-handler-nmcxh" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.570896 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv97h\" (UniqueName: \"kubernetes.io/projected/3d6ead77-e785-46e2-a9d8-1cb1bf83ae85-kube-api-access-sv97h\") pod \"nmstate-handler-nmcxh\" (UID: \"3d6ead77-e785-46e2-a9d8-1cb1bf83ae85\") " pod="openshift-nmstate/nmstate-handler-nmcxh" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.570922 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8q8q\" (UniqueName: 
\"kubernetes.io/projected/6dbc6bec-7e30-4b6e-9db8-b0ea8f210baa-kube-api-access-x8q8q\") pod \"nmstate-metrics-7f946cbc9-wlc9g\" (UID: \"6dbc6bec-7e30-4b6e-9db8-b0ea8f210baa\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-wlc9g" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.571085 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3d6ead77-e785-46e2-a9d8-1cb1bf83ae85-nmstate-lock\") pod \"nmstate-handler-nmcxh\" (UID: \"3d6ead77-e785-46e2-a9d8-1cb1bf83ae85\") " pod="openshift-nmstate/nmstate-handler-nmcxh" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.571133 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl6wv\" (UniqueName: \"kubernetes.io/projected/b69ee833-32fa-4d5f-b561-38b4e4a89a58-kube-api-access-sl6wv\") pod \"nmstate-webhook-5f6d4c5ccb-sflfc\" (UID: \"b69ee833-32fa-4d5f-b561-38b4e4a89a58\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-sflfc" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.571169 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b69ee833-32fa-4d5f-b561-38b4e4a89a58-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-sflfc\" (UID: \"b69ee833-32fa-4d5f-b561-38b4e4a89a58\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-sflfc" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.571183 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3d6ead77-e785-46e2-a9d8-1cb1bf83ae85-dbus-socket\") pod \"nmstate-handler-nmcxh\" (UID: \"3d6ead77-e785-46e2-a9d8-1cb1bf83ae85\") " pod="openshift-nmstate/nmstate-handler-nmcxh" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.586351 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-x8q8q\" (UniqueName: \"kubernetes.io/projected/6dbc6bec-7e30-4b6e-9db8-b0ea8f210baa-kube-api-access-x8q8q\") pod \"nmstate-metrics-7f946cbc9-wlc9g\" (UID: \"6dbc6bec-7e30-4b6e-9db8-b0ea8f210baa\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-wlc9g" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.621390 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7zkrr"] Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.623166 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7zkrr" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.629299 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.629349 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-zk88p" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.634419 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.638889 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7zkrr"] Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.671708 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5df58dfa-f2af-439b-bd54-5253e8804e10-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-7zkrr\" (UID: \"5df58dfa-f2af-439b-bd54-5253e8804e10\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7zkrr" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.671757 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/3d6ead77-e785-46e2-a9d8-1cb1bf83ae85-ovs-socket\") pod \"nmstate-handler-nmcxh\" (UID: \"3d6ead77-e785-46e2-a9d8-1cb1bf83ae85\") " pod="openshift-nmstate/nmstate-handler-nmcxh" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.671818 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3d6ead77-e785-46e2-a9d8-1cb1bf83ae85-ovs-socket\") pod \"nmstate-handler-nmcxh\" (UID: \"3d6ead77-e785-46e2-a9d8-1cb1bf83ae85\") " pod="openshift-nmstate/nmstate-handler-nmcxh" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.671862 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv97h\" (UniqueName: \"kubernetes.io/projected/3d6ead77-e785-46e2-a9d8-1cb1bf83ae85-kube-api-access-sv97h\") pod \"nmstate-handler-nmcxh\" (UID: \"3d6ead77-e785-46e2-a9d8-1cb1bf83ae85\") " pod="openshift-nmstate/nmstate-handler-nmcxh" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.671903 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsjv5\" (UniqueName: \"kubernetes.io/projected/5df58dfa-f2af-439b-bd54-5253e8804e10-kube-api-access-qsjv5\") pod \"nmstate-console-plugin-7fbb5f6569-7zkrr\" (UID: \"5df58dfa-f2af-439b-bd54-5253e8804e10\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7zkrr" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.671966 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3d6ead77-e785-46e2-a9d8-1cb1bf83ae85-nmstate-lock\") pod \"nmstate-handler-nmcxh\" (UID: \"3d6ead77-e785-46e2-a9d8-1cb1bf83ae85\") " pod="openshift-nmstate/nmstate-handler-nmcxh" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.671984 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl6wv\" (UniqueName: 
\"kubernetes.io/projected/b69ee833-32fa-4d5f-b561-38b4e4a89a58-kube-api-access-sl6wv\") pod \"nmstate-webhook-5f6d4c5ccb-sflfc\" (UID: \"b69ee833-32fa-4d5f-b561-38b4e4a89a58\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-sflfc" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.672020 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b69ee833-32fa-4d5f-b561-38b4e4a89a58-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-sflfc\" (UID: \"b69ee833-32fa-4d5f-b561-38b4e4a89a58\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-sflfc" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.672035 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3d6ead77-e785-46e2-a9d8-1cb1bf83ae85-dbus-socket\") pod \"nmstate-handler-nmcxh\" (UID: \"3d6ead77-e785-46e2-a9d8-1cb1bf83ae85\") " pod="openshift-nmstate/nmstate-handler-nmcxh" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.672062 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5df58dfa-f2af-439b-bd54-5253e8804e10-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-7zkrr\" (UID: \"5df58dfa-f2af-439b-bd54-5253e8804e10\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7zkrr" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.672062 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3d6ead77-e785-46e2-a9d8-1cb1bf83ae85-nmstate-lock\") pod \"nmstate-handler-nmcxh\" (UID: \"3d6ead77-e785-46e2-a9d8-1cb1bf83ae85\") " pod="openshift-nmstate/nmstate-handler-nmcxh" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.672273 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/3d6ead77-e785-46e2-a9d8-1cb1bf83ae85-dbus-socket\") pod \"nmstate-handler-nmcxh\" (UID: \"3d6ead77-e785-46e2-a9d8-1cb1bf83ae85\") " pod="openshift-nmstate/nmstate-handler-nmcxh" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.674895 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b69ee833-32fa-4d5f-b561-38b4e4a89a58-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-sflfc\" (UID: \"b69ee833-32fa-4d5f-b561-38b4e4a89a58\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-sflfc" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.684086 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl6wv\" (UniqueName: \"kubernetes.io/projected/b69ee833-32fa-4d5f-b561-38b4e4a89a58-kube-api-access-sl6wv\") pod \"nmstate-webhook-5f6d4c5ccb-sflfc\" (UID: \"b69ee833-32fa-4d5f-b561-38b4e4a89a58\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-sflfc" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.685979 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv97h\" (UniqueName: \"kubernetes.io/projected/3d6ead77-e785-46e2-a9d8-1cb1bf83ae85-kube-api-access-sv97h\") pod \"nmstate-handler-nmcxh\" (UID: \"3d6ead77-e785-46e2-a9d8-1cb1bf83ae85\") " pod="openshift-nmstate/nmstate-handler-nmcxh" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.764586 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-wlc9g" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.772870 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsjv5\" (UniqueName: \"kubernetes.io/projected/5df58dfa-f2af-439b-bd54-5253e8804e10-kube-api-access-qsjv5\") pod \"nmstate-console-plugin-7fbb5f6569-7zkrr\" (UID: \"5df58dfa-f2af-439b-bd54-5253e8804e10\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7zkrr" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.772942 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5df58dfa-f2af-439b-bd54-5253e8804e10-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-7zkrr\" (UID: \"5df58dfa-f2af-439b-bd54-5253e8804e10\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7zkrr" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.773221 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5df58dfa-f2af-439b-bd54-5253e8804e10-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-7zkrr\" (UID: \"5df58dfa-f2af-439b-bd54-5253e8804e10\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7zkrr" Dec 05 10:36:50 crc kubenswrapper[4796]: E1205 10:36:50.773394 4796 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 05 10:36:50 crc kubenswrapper[4796]: E1205 10:36:50.773449 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5df58dfa-f2af-439b-bd54-5253e8804e10-plugin-serving-cert podName:5df58dfa-f2af-439b-bd54-5253e8804e10 nodeName:}" failed. No retries permitted until 2025-12-05 10:36:51.273432863 +0000 UTC m=+557.561538376 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/5df58dfa-f2af-439b-bd54-5253e8804e10-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-7zkrr" (UID: "5df58dfa-f2af-439b-bd54-5253e8804e10") : secret "plugin-serving-cert" not found Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.773591 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5df58dfa-f2af-439b-bd54-5253e8804e10-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-7zkrr\" (UID: \"5df58dfa-f2af-439b-bd54-5253e8804e10\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7zkrr" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.781899 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-sflfc" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.782731 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-b99d74ffb-7grx7"] Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.783297 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.796146 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-nmcxh" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.797655 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsjv5\" (UniqueName: \"kubernetes.io/projected/5df58dfa-f2af-439b-bd54-5253e8804e10-kube-api-access-qsjv5\") pod \"nmstate-console-plugin-7fbb5f6569-7zkrr\" (UID: \"5df58dfa-f2af-439b-bd54-5253e8804e10\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7zkrr" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.802494 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b99d74ffb-7grx7"] Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.874953 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/231a574d-29f2-40da-86e1-4bea4b47a09b-console-config\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.875102 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/231a574d-29f2-40da-86e1-4bea4b47a09b-service-ca\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.875122 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/231a574d-29f2-40da-86e1-4bea4b47a09b-trusted-ca-bundle\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.875175 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/231a574d-29f2-40da-86e1-4bea4b47a09b-console-oauth-config\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.875190 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/231a574d-29f2-40da-86e1-4bea4b47a09b-oauth-serving-cert\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.875215 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/231a574d-29f2-40da-86e1-4bea4b47a09b-console-serving-cert\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.875252 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5868x\" (UniqueName: \"kubernetes.io/projected/231a574d-29f2-40da-86e1-4bea4b47a09b-kube-api-access-5868x\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.960744 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-sflfc"] Dec 05 10:36:50 crc kubenswrapper[4796]: W1205 10:36:50.962582 4796 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb69ee833_32fa_4d5f_b561_38b4e4a89a58.slice/crio-3873ba19fe16dc9aa6e22288dc4f810da1da745990d5fc6178492b18fea2a8a7 WatchSource:0}: Error finding container 3873ba19fe16dc9aa6e22288dc4f810da1da745990d5fc6178492b18fea2a8a7: Status 404 returned error can't find the container with id 3873ba19fe16dc9aa6e22288dc4f810da1da745990d5fc6178492b18fea2a8a7 Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.975733 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/231a574d-29f2-40da-86e1-4bea4b47a09b-console-oauth-config\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.975762 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/231a574d-29f2-40da-86e1-4bea4b47a09b-oauth-serving-cert\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.975795 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/231a574d-29f2-40da-86e1-4bea4b47a09b-console-serving-cert\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.975816 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5868x\" (UniqueName: \"kubernetes.io/projected/231a574d-29f2-40da-86e1-4bea4b47a09b-kube-api-access-5868x\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" 
Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.976481 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/231a574d-29f2-40da-86e1-4bea4b47a09b-console-config\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.976511 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/231a574d-29f2-40da-86e1-4bea4b47a09b-service-ca\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.976528 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/231a574d-29f2-40da-86e1-4bea4b47a09b-trusted-ca-bundle\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.976552 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/231a574d-29f2-40da-86e1-4bea4b47a09b-oauth-serving-cert\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.977169 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/231a574d-29f2-40da-86e1-4bea4b47a09b-console-config\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.977418 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/231a574d-29f2-40da-86e1-4bea4b47a09b-service-ca\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.977890 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/231a574d-29f2-40da-86e1-4bea4b47a09b-trusted-ca-bundle\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.979402 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/231a574d-29f2-40da-86e1-4bea4b47a09b-console-oauth-config\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.979457 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/231a574d-29f2-40da-86e1-4bea4b47a09b-console-serving-cert\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:50 crc kubenswrapper[4796]: I1205 10:36:50.988920 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5868x\" (UniqueName: \"kubernetes.io/projected/231a574d-29f2-40da-86e1-4bea4b47a09b-kube-api-access-5868x\") pod \"console-b99d74ffb-7grx7\" (UID: \"231a574d-29f2-40da-86e1-4bea4b47a09b\") " pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:51 crc kubenswrapper[4796]: I1205 10:36:51.097008 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:36:51 crc kubenswrapper[4796]: I1205 10:36:51.122214 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-wlc9g"] Dec 05 10:36:51 crc kubenswrapper[4796]: I1205 10:36:51.278692 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5df58dfa-f2af-439b-bd54-5253e8804e10-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-7zkrr\" (UID: \"5df58dfa-f2af-439b-bd54-5253e8804e10\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7zkrr" Dec 05 10:36:51 crc kubenswrapper[4796]: I1205 10:36:51.281631 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5df58dfa-f2af-439b-bd54-5253e8804e10-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-7zkrr\" (UID: \"5df58dfa-f2af-439b-bd54-5253e8804e10\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7zkrr" Dec 05 10:36:51 crc kubenswrapper[4796]: I1205 10:36:51.446266 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b99d74ffb-7grx7"] Dec 05 10:36:51 crc kubenswrapper[4796]: W1205 10:36:51.448699 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod231a574d_29f2_40da_86e1_4bea4b47a09b.slice/crio-6c5ab401ea4bcb55e550d92cfcf52fc17788004281186d642506e9765848595f WatchSource:0}: Error finding container 6c5ab401ea4bcb55e550d92cfcf52fc17788004281186d642506e9765848595f: Status 404 returned error can't find the container with id 6c5ab401ea4bcb55e550d92cfcf52fc17788004281186d642506e9765848595f Dec 05 10:36:51 crc kubenswrapper[4796]: I1205 10:36:51.546616 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7zkrr" Dec 05 10:36:51 crc kubenswrapper[4796]: I1205 10:36:51.731706 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b99d74ffb-7grx7" event={"ID":"231a574d-29f2-40da-86e1-4bea4b47a09b","Type":"ContainerStarted","Data":"051bc33e6722664e02ff735f4999a83d471d85f09a19277757b19dd9c85153ee"} Dec 05 10:36:51 crc kubenswrapper[4796]: I1205 10:36:51.731898 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b99d74ffb-7grx7" event={"ID":"231a574d-29f2-40da-86e1-4bea4b47a09b","Type":"ContainerStarted","Data":"6c5ab401ea4bcb55e550d92cfcf52fc17788004281186d642506e9765848595f"} Dec 05 10:36:51 crc kubenswrapper[4796]: I1205 10:36:51.732434 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-nmcxh" event={"ID":"3d6ead77-e785-46e2-a9d8-1cb1bf83ae85","Type":"ContainerStarted","Data":"2e9597f1ad14173d824a266b4d63e765e748408fb6b070da8edb26a95c86bffa"} Dec 05 10:36:51 crc kubenswrapper[4796]: I1205 10:36:51.733099 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-sflfc" event={"ID":"b69ee833-32fa-4d5f-b561-38b4e4a89a58","Type":"ContainerStarted","Data":"3873ba19fe16dc9aa6e22288dc4f810da1da745990d5fc6178492b18fea2a8a7"} Dec 05 10:36:51 crc kubenswrapper[4796]: I1205 10:36:51.733699 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-wlc9g" event={"ID":"6dbc6bec-7e30-4b6e-9db8-b0ea8f210baa","Type":"ContainerStarted","Data":"b2aa78a39f6bd0ddf137b6a3e6774f9ef5164f4e2cc8f8118ad6994b88c76b9d"} Dec 05 10:36:51 crc kubenswrapper[4796]: I1205 10:36:51.743613 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-b99d74ffb-7grx7" podStartSLOduration=1.743597764 podStartE2EDuration="1.743597764s" podCreationTimestamp="2025-12-05 10:36:50 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:36:51.741959891 +0000 UTC m=+558.030065404" watchObservedRunningTime="2025-12-05 10:36:51.743597764 +0000 UTC m=+558.031703277" Dec 05 10:36:51 crc kubenswrapper[4796]: I1205 10:36:51.891397 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7zkrr"] Dec 05 10:36:51 crc kubenswrapper[4796]: W1205 10:36:51.894486 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5df58dfa_f2af_439b_bd54_5253e8804e10.slice/crio-faa11682181c800bcbb38901e7c32925add82fec8ea0b3caa19c5005e370ea84 WatchSource:0}: Error finding container faa11682181c800bcbb38901e7c32925add82fec8ea0b3caa19c5005e370ea84: Status 404 returned error can't find the container with id faa11682181c800bcbb38901e7c32925add82fec8ea0b3caa19c5005e370ea84 Dec 05 10:36:52 crc kubenswrapper[4796]: I1205 10:36:52.741573 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7zkrr" event={"ID":"5df58dfa-f2af-439b-bd54-5253e8804e10","Type":"ContainerStarted","Data":"faa11682181c800bcbb38901e7c32925add82fec8ea0b3caa19c5005e370ea84"} Dec 05 10:36:53 crc kubenswrapper[4796]: I1205 10:36:53.747853 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-wlc9g" event={"ID":"6dbc6bec-7e30-4b6e-9db8-b0ea8f210baa","Type":"ContainerStarted","Data":"49877cc6d2043b4e45ffa927ee06c0af351b09068d6cb9509f0ab2f45f9752e2"} Dec 05 10:36:53 crc kubenswrapper[4796]: I1205 10:36:53.748730 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-sflfc" event={"ID":"b69ee833-32fa-4d5f-b561-38b4e4a89a58","Type":"ContainerStarted","Data":"2140bf4fa625ba297bd96110ff18f24e34c21d78fb0f94af1e36c8802d052ebd"} Dec 05 10:36:53 crc 
kubenswrapper[4796]: I1205 10:36:53.748827 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-sflfc" Dec 05 10:36:53 crc kubenswrapper[4796]: I1205 10:36:53.750372 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-nmcxh" event={"ID":"3d6ead77-e785-46e2-a9d8-1cb1bf83ae85","Type":"ContainerStarted","Data":"50fc7d5e59b552c2bbda801c0d21a05cbc5f960ca187566bcc190cfc61dd5e1c"} Dec 05 10:36:53 crc kubenswrapper[4796]: I1205 10:36:53.750498 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-nmcxh" Dec 05 10:36:53 crc kubenswrapper[4796]: I1205 10:36:53.761189 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-sflfc" podStartSLOduration=1.715639947 podStartE2EDuration="3.761174453s" podCreationTimestamp="2025-12-05 10:36:50 +0000 UTC" firstStartedPulling="2025-12-05 10:36:50.964288501 +0000 UTC m=+557.252394014" lastFinishedPulling="2025-12-05 10:36:53.009823007 +0000 UTC m=+559.297928520" observedRunningTime="2025-12-05 10:36:53.759056717 +0000 UTC m=+560.047162231" watchObservedRunningTime="2025-12-05 10:36:53.761174453 +0000 UTC m=+560.049279966" Dec 05 10:36:53 crc kubenswrapper[4796]: I1205 10:36:53.777082 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-nmcxh" podStartSLOduration=1.609244342 podStartE2EDuration="3.777067364s" podCreationTimestamp="2025-12-05 10:36:50 +0000 UTC" firstStartedPulling="2025-12-05 10:36:50.843438523 +0000 UTC m=+557.131544036" lastFinishedPulling="2025-12-05 10:36:53.011261545 +0000 UTC m=+559.299367058" observedRunningTime="2025-12-05 10:36:53.774032322 +0000 UTC m=+560.062137835" watchObservedRunningTime="2025-12-05 10:36:53.777067364 +0000 UTC m=+560.065172876" Dec 05 10:36:54 crc kubenswrapper[4796]: I1205 10:36:54.757710 4796 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7zkrr" event={"ID":"5df58dfa-f2af-439b-bd54-5253e8804e10","Type":"ContainerStarted","Data":"87ac564f580f5c14792c96b6eae257cd664c2943335f427f519a6ace85fb7c6a"} Dec 05 10:36:54 crc kubenswrapper[4796]: I1205 10:36:54.769202 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-7zkrr" podStartSLOduration=2.950002916 podStartE2EDuration="4.769186081s" podCreationTimestamp="2025-12-05 10:36:50 +0000 UTC" firstStartedPulling="2025-12-05 10:36:51.89619 +0000 UTC m=+558.184295514" lastFinishedPulling="2025-12-05 10:36:53.715373166 +0000 UTC m=+560.003478679" observedRunningTime="2025-12-05 10:36:54.766848282 +0000 UTC m=+561.054953794" watchObservedRunningTime="2025-12-05 10:36:54.769186081 +0000 UTC m=+561.057291594" Dec 05 10:36:57 crc kubenswrapper[4796]: I1205 10:36:57.773288 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-wlc9g" event={"ID":"6dbc6bec-7e30-4b6e-9db8-b0ea8f210baa","Type":"ContainerStarted","Data":"0d5dc163c2fa260bfcef866d15440544fcda84651755e752b091f8c589ece0c2"} Dec 05 10:36:57 crc kubenswrapper[4796]: I1205 10:36:57.786583 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-wlc9g" podStartSLOduration=1.452264548 podStartE2EDuration="7.786569306s" podCreationTimestamp="2025-12-05 10:36:50 +0000 UTC" firstStartedPulling="2025-12-05 10:36:51.137826198 +0000 UTC m=+557.425931711" lastFinishedPulling="2025-12-05 10:36:57.472130957 +0000 UTC m=+563.760236469" observedRunningTime="2025-12-05 10:36:57.785480557 +0000 UTC m=+564.073586070" watchObservedRunningTime="2025-12-05 10:36:57.786569306 +0000 UTC m=+564.074674809" Dec 05 10:37:00 crc kubenswrapper[4796]: I1205 10:37:00.813292 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-nmstate/nmstate-handler-nmcxh" Dec 05 10:37:01 crc kubenswrapper[4796]: I1205 10:37:01.097722 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:37:01 crc kubenswrapper[4796]: I1205 10:37:01.097782 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:37:01 crc kubenswrapper[4796]: I1205 10:37:01.102423 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:37:01 crc kubenswrapper[4796]: I1205 10:37:01.795210 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-b99d74ffb-7grx7" Dec 05 10:37:01 crc kubenswrapper[4796]: I1205 10:37:01.826269 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-t4hjr"] Dec 05 10:37:05 crc kubenswrapper[4796]: I1205 10:37:05.177451 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:37:05 crc kubenswrapper[4796]: I1205 10:37:05.177765 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:37:05 crc kubenswrapper[4796]: I1205 10:37:05.177804 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:37:05 crc kubenswrapper[4796]: I1205 10:37:05.178215 4796 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5f20b7eecdefbb86282fce61808a4c3f117c21f24b1bad20aefe872aef8a2e8b"} pod="openshift-machine-config-operator/machine-config-daemon-9pllw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 10:37:05 crc kubenswrapper[4796]: I1205 10:37:05.178258 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" containerID="cri-o://5f20b7eecdefbb86282fce61808a4c3f117c21f24b1bad20aefe872aef8a2e8b" gracePeriod=600 Dec 05 10:37:05 crc kubenswrapper[4796]: I1205 10:37:05.808125 4796 generic.go:334] "Generic (PLEG): container finished" podID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerID="5f20b7eecdefbb86282fce61808a4c3f117c21f24b1bad20aefe872aef8a2e8b" exitCode=0 Dec 05 10:37:05 crc kubenswrapper[4796]: I1205 10:37:05.808271 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerDied","Data":"5f20b7eecdefbb86282fce61808a4c3f117c21f24b1bad20aefe872aef8a2e8b"} Dec 05 10:37:05 crc kubenswrapper[4796]: I1205 10:37:05.808464 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerStarted","Data":"11955f72d8cab8d1dbc9ce29d0048cc5cd67ff89aba57e0b67af6a83dc91a6d7"} Dec 05 10:37:05 crc kubenswrapper[4796]: I1205 10:37:05.808481 4796 scope.go:117] "RemoveContainer" containerID="b237acecfc6f07a912dd5d391cf00caaa99fd4cca591ed146b85426c5cf97464" Dec 05 10:37:10 crc kubenswrapper[4796]: I1205 10:37:10.787536 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-sflfc" Dec 05 10:37:20 crc kubenswrapper[4796]: I1205 10:37:20.923102 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj"] Dec 05 10:37:20 crc kubenswrapper[4796]: I1205 10:37:20.924360 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj" Dec 05 10:37:20 crc kubenswrapper[4796]: I1205 10:37:20.925825 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 05 10:37:20 crc kubenswrapper[4796]: I1205 10:37:20.929540 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj"] Dec 05 10:37:20 crc kubenswrapper[4796]: I1205 10:37:20.992046 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1aa954fc-98a6-42f8-b3c5-629859212fab-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj\" (UID: \"1aa954fc-98a6-42f8-b3c5-629859212fab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj" Dec 05 10:37:20 crc kubenswrapper[4796]: I1205 10:37:20.992089 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bwlz\" (UniqueName: \"kubernetes.io/projected/1aa954fc-98a6-42f8-b3c5-629859212fab-kube-api-access-8bwlz\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj\" (UID: \"1aa954fc-98a6-42f8-b3c5-629859212fab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj" Dec 05 10:37:20 crc kubenswrapper[4796]: I1205 10:37:20.992111 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1aa954fc-98a6-42f8-b3c5-629859212fab-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj\" (UID: \"1aa954fc-98a6-42f8-b3c5-629859212fab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj" Dec 05 10:37:21 crc kubenswrapper[4796]: I1205 10:37:21.093625 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1aa954fc-98a6-42f8-b3c5-629859212fab-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj\" (UID: \"1aa954fc-98a6-42f8-b3c5-629859212fab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj" Dec 05 10:37:21 crc kubenswrapper[4796]: I1205 10:37:21.093707 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bwlz\" (UniqueName: \"kubernetes.io/projected/1aa954fc-98a6-42f8-b3c5-629859212fab-kube-api-access-8bwlz\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj\" (UID: \"1aa954fc-98a6-42f8-b3c5-629859212fab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj" Dec 05 10:37:21 crc kubenswrapper[4796]: I1205 10:37:21.093740 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1aa954fc-98a6-42f8-b3c5-629859212fab-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj\" (UID: \"1aa954fc-98a6-42f8-b3c5-629859212fab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj" Dec 05 10:37:21 crc kubenswrapper[4796]: I1205 10:37:21.094075 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1aa954fc-98a6-42f8-b3c5-629859212fab-bundle\") 
pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj\" (UID: \"1aa954fc-98a6-42f8-b3c5-629859212fab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj" Dec 05 10:37:21 crc kubenswrapper[4796]: I1205 10:37:21.094141 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1aa954fc-98a6-42f8-b3c5-629859212fab-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj\" (UID: \"1aa954fc-98a6-42f8-b3c5-629859212fab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj" Dec 05 10:37:21 crc kubenswrapper[4796]: I1205 10:37:21.108105 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bwlz\" (UniqueName: \"kubernetes.io/projected/1aa954fc-98a6-42f8-b3c5-629859212fab-kube-api-access-8bwlz\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj\" (UID: \"1aa954fc-98a6-42f8-b3c5-629859212fab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj" Dec 05 10:37:21 crc kubenswrapper[4796]: I1205 10:37:21.235151 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj" Dec 05 10:37:21 crc kubenswrapper[4796]: I1205 10:37:21.557308 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj"] Dec 05 10:37:21 crc kubenswrapper[4796]: I1205 10:37:21.886509 4796 generic.go:334] "Generic (PLEG): container finished" podID="1aa954fc-98a6-42f8-b3c5-629859212fab" containerID="c72632880e68fda9eba2507b7cafb617b5b931d4836d52693f6593a8123f48ed" exitCode=0 Dec 05 10:37:21 crc kubenswrapper[4796]: I1205 10:37:21.886542 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj" event={"ID":"1aa954fc-98a6-42f8-b3c5-629859212fab","Type":"ContainerDied","Data":"c72632880e68fda9eba2507b7cafb617b5b931d4836d52693f6593a8123f48ed"} Dec 05 10:37:21 crc kubenswrapper[4796]: I1205 10:37:21.886562 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj" event={"ID":"1aa954fc-98a6-42f8-b3c5-629859212fab","Type":"ContainerStarted","Data":"de4cafca29ac10c4ad4e1f77a027e6c937c8e87d0b1f442c79fb4718057deaa1"} Dec 05 10:37:23 crc kubenswrapper[4796]: I1205 10:37:23.894921 4796 generic.go:334] "Generic (PLEG): container finished" podID="1aa954fc-98a6-42f8-b3c5-629859212fab" containerID="a626f5fb6ff056c95a7c6aa727e6e9740afd2ccfda15e4f7b36a6f89b17eb750" exitCode=0 Dec 05 10:37:23 crc kubenswrapper[4796]: I1205 10:37:23.894955 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj" event={"ID":"1aa954fc-98a6-42f8-b3c5-629859212fab","Type":"ContainerDied","Data":"a626f5fb6ff056c95a7c6aa727e6e9740afd2ccfda15e4f7b36a6f89b17eb750"} Dec 05 10:37:24 crc kubenswrapper[4796]: I1205 10:37:24.900271 4796 
generic.go:334] "Generic (PLEG): container finished" podID="1aa954fc-98a6-42f8-b3c5-629859212fab" containerID="c16a9a1acf061f3a08e0372f925264141fa4d0917e1cc6381bb71e5b50ba465e" exitCode=0 Dec 05 10:37:24 crc kubenswrapper[4796]: I1205 10:37:24.900306 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj" event={"ID":"1aa954fc-98a6-42f8-b3c5-629859212fab","Type":"ContainerDied","Data":"c16a9a1acf061f3a08e0372f925264141fa4d0917e1cc6381bb71e5b50ba465e"} Dec 05 10:37:26 crc kubenswrapper[4796]: I1205 10:37:26.063245 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj" Dec 05 10:37:26 crc kubenswrapper[4796]: I1205 10:37:26.240424 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bwlz\" (UniqueName: \"kubernetes.io/projected/1aa954fc-98a6-42f8-b3c5-629859212fab-kube-api-access-8bwlz\") pod \"1aa954fc-98a6-42f8-b3c5-629859212fab\" (UID: \"1aa954fc-98a6-42f8-b3c5-629859212fab\") " Dec 05 10:37:26 crc kubenswrapper[4796]: I1205 10:37:26.240466 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1aa954fc-98a6-42f8-b3c5-629859212fab-bundle\") pod \"1aa954fc-98a6-42f8-b3c5-629859212fab\" (UID: \"1aa954fc-98a6-42f8-b3c5-629859212fab\") " Dec 05 10:37:26 crc kubenswrapper[4796]: I1205 10:37:26.240481 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1aa954fc-98a6-42f8-b3c5-629859212fab-util\") pod \"1aa954fc-98a6-42f8-b3c5-629859212fab\" (UID: \"1aa954fc-98a6-42f8-b3c5-629859212fab\") " Dec 05 10:37:26 crc kubenswrapper[4796]: I1205 10:37:26.241477 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1aa954fc-98a6-42f8-b3c5-629859212fab-bundle" (OuterVolumeSpecName: "bundle") pod "1aa954fc-98a6-42f8-b3c5-629859212fab" (UID: "1aa954fc-98a6-42f8-b3c5-629859212fab"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:37:26 crc kubenswrapper[4796]: I1205 10:37:26.244379 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aa954fc-98a6-42f8-b3c5-629859212fab-kube-api-access-8bwlz" (OuterVolumeSpecName: "kube-api-access-8bwlz") pod "1aa954fc-98a6-42f8-b3c5-629859212fab" (UID: "1aa954fc-98a6-42f8-b3c5-629859212fab"). InnerVolumeSpecName "kube-api-access-8bwlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:37:26 crc kubenswrapper[4796]: I1205 10:37:26.250013 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1aa954fc-98a6-42f8-b3c5-629859212fab-util" (OuterVolumeSpecName: "util") pod "1aa954fc-98a6-42f8-b3c5-629859212fab" (UID: "1aa954fc-98a6-42f8-b3c5-629859212fab"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:37:26 crc kubenswrapper[4796]: I1205 10:37:26.342072 4796 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1aa954fc-98a6-42f8-b3c5-629859212fab-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:37:26 crc kubenswrapper[4796]: I1205 10:37:26.342109 4796 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1aa954fc-98a6-42f8-b3c5-629859212fab-util\") on node \"crc\" DevicePath \"\"" Dec 05 10:37:26 crc kubenswrapper[4796]: I1205 10:37:26.342118 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bwlz\" (UniqueName: \"kubernetes.io/projected/1aa954fc-98a6-42f8-b3c5-629859212fab-kube-api-access-8bwlz\") on node \"crc\" DevicePath \"\"" Dec 05 10:37:26 crc kubenswrapper[4796]: I1205 10:37:26.851840 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-t4hjr" podUID="8e207003-e5a7-4cd4-a6e4-748fd8ece44b" containerName="console" containerID="cri-o://b9acad5ce5630fcb8b41f0119683ba36d9403f3efee1cd80b643f5b4ed412598" gracePeriod=15 Dec 05 10:37:26 crc kubenswrapper[4796]: I1205 10:37:26.909250 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj" event={"ID":"1aa954fc-98a6-42f8-b3c5-629859212fab","Type":"ContainerDied","Data":"de4cafca29ac10c4ad4e1f77a027e6c937c8e87d0b1f442c79fb4718057deaa1"} Dec 05 10:37:26 crc kubenswrapper[4796]: I1205 10:37:26.909286 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de4cafca29ac10c4ad4e1f77a027e6c937c8e87d0b1f442c79fb4718057deaa1" Dec 05 10:37:26 crc kubenswrapper[4796]: I1205 10:37:26.909302 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.128346 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-t4hjr_8e207003-e5a7-4cd4-a6e4-748fd8ece44b/console/0.log" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.128531 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.251501 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-trusted-ca-bundle\") pod \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.251549 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-service-ca\") pod \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.251603 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-oauth-serving-cert\") pod \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.251656 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-console-oauth-config\") pod \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " Dec 05 10:37:27 crc 
kubenswrapper[4796]: I1205 10:37:27.251676 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-console-config\") pod \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.251759 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69qvf\" (UniqueName: \"kubernetes.io/projected/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-kube-api-access-69qvf\") pod \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.251780 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-console-serving-cert\") pod \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\" (UID: \"8e207003-e5a7-4cd4-a6e4-748fd8ece44b\") " Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.252240 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8e207003-e5a7-4cd4-a6e4-748fd8ece44b" (UID: "8e207003-e5a7-4cd4-a6e4-748fd8ece44b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.252264 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-console-config" (OuterVolumeSpecName: "console-config") pod "8e207003-e5a7-4cd4-a6e4-748fd8ece44b" (UID: "8e207003-e5a7-4cd4-a6e4-748fd8ece44b"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.252248 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-service-ca" (OuterVolumeSpecName: "service-ca") pod "8e207003-e5a7-4cd4-a6e4-748fd8ece44b" (UID: "8e207003-e5a7-4cd4-a6e4-748fd8ece44b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.252288 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8e207003-e5a7-4cd4-a6e4-748fd8ece44b" (UID: "8e207003-e5a7-4cd4-a6e4-748fd8ece44b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.252501 4796 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.252520 4796 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-service-ca\") on node \"crc\" DevicePath \"\"" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.252528 4796 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.252536 4796 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-console-config\") on node 
\"crc\" DevicePath \"\"" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.256489 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-kube-api-access-69qvf" (OuterVolumeSpecName: "kube-api-access-69qvf") pod "8e207003-e5a7-4cd4-a6e4-748fd8ece44b" (UID: "8e207003-e5a7-4cd4-a6e4-748fd8ece44b"). InnerVolumeSpecName "kube-api-access-69qvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.257486 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8e207003-e5a7-4cd4-a6e4-748fd8ece44b" (UID: "8e207003-e5a7-4cd4-a6e4-748fd8ece44b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.261197 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8e207003-e5a7-4cd4-a6e4-748fd8ece44b" (UID: "8e207003-e5a7-4cd4-a6e4-748fd8ece44b"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.352869 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69qvf\" (UniqueName: \"kubernetes.io/projected/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-kube-api-access-69qvf\") on node \"crc\" DevicePath \"\"" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.352896 4796 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.352905 4796 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e207003-e5a7-4cd4-a6e4-748fd8ece44b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.914733 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-t4hjr_8e207003-e5a7-4cd4-a6e4-748fd8ece44b/console/0.log" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.914793 4796 generic.go:334] "Generic (PLEG): container finished" podID="8e207003-e5a7-4cd4-a6e4-748fd8ece44b" containerID="b9acad5ce5630fcb8b41f0119683ba36d9403f3efee1cd80b643f5b4ed412598" exitCode=2 Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.914826 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t4hjr" event={"ID":"8e207003-e5a7-4cd4-a6e4-748fd8ece44b","Type":"ContainerDied","Data":"b9acad5ce5630fcb8b41f0119683ba36d9403f3efee1cd80b643f5b4ed412598"} Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.914865 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-t4hjr" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.914881 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-t4hjr" event={"ID":"8e207003-e5a7-4cd4-a6e4-748fd8ece44b","Type":"ContainerDied","Data":"2730354f504be8b4296aad98febd53bf2eb67507de0ddd2ec405ab0bca4d92c9"} Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.914903 4796 scope.go:117] "RemoveContainer" containerID="b9acad5ce5630fcb8b41f0119683ba36d9403f3efee1cd80b643f5b4ed412598" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.926947 4796 scope.go:117] "RemoveContainer" containerID="b9acad5ce5630fcb8b41f0119683ba36d9403f3efee1cd80b643f5b4ed412598" Dec 05 10:37:27 crc kubenswrapper[4796]: E1205 10:37:27.927295 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9acad5ce5630fcb8b41f0119683ba36d9403f3efee1cd80b643f5b4ed412598\": container with ID starting with b9acad5ce5630fcb8b41f0119683ba36d9403f3efee1cd80b643f5b4ed412598 not found: ID does not exist" containerID="b9acad5ce5630fcb8b41f0119683ba36d9403f3efee1cd80b643f5b4ed412598" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.927336 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9acad5ce5630fcb8b41f0119683ba36d9403f3efee1cd80b643f5b4ed412598"} err="failed to get container status \"b9acad5ce5630fcb8b41f0119683ba36d9403f3efee1cd80b643f5b4ed412598\": rpc error: code = NotFound desc = could not find container \"b9acad5ce5630fcb8b41f0119683ba36d9403f3efee1cd80b643f5b4ed412598\": container with ID starting with b9acad5ce5630fcb8b41f0119683ba36d9403f3efee1cd80b643f5b4ed412598 not found: ID does not exist" Dec 05 10:37:27 crc kubenswrapper[4796]: I1205 10:37:27.939748 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-t4hjr"] Dec 05 10:37:27 crc 
kubenswrapper[4796]: I1205 10:37:27.942971 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-t4hjr"] Dec 05 10:37:28 crc kubenswrapper[4796]: I1205 10:37:28.035879 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e207003-e5a7-4cd4-a6e4-748fd8ece44b" path="/var/lib/kubelet/pods/8e207003-e5a7-4cd4-a6e4-748fd8ece44b/volumes" Dec 05 10:37:34 crc kubenswrapper[4796]: I1205 10:37:34.984849 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5848764dff-zxvnb"] Dec 05 10:37:34 crc kubenswrapper[4796]: E1205 10:37:34.985172 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aa954fc-98a6-42f8-b3c5-629859212fab" containerName="extract" Dec 05 10:37:34 crc kubenswrapper[4796]: I1205 10:37:34.985182 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa954fc-98a6-42f8-b3c5-629859212fab" containerName="extract" Dec 05 10:37:34 crc kubenswrapper[4796]: E1205 10:37:34.985192 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e207003-e5a7-4cd4-a6e4-748fd8ece44b" containerName="console" Dec 05 10:37:34 crc kubenswrapper[4796]: I1205 10:37:34.985198 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e207003-e5a7-4cd4-a6e4-748fd8ece44b" containerName="console" Dec 05 10:37:34 crc kubenswrapper[4796]: E1205 10:37:34.985215 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aa954fc-98a6-42f8-b3c5-629859212fab" containerName="util" Dec 05 10:37:34 crc kubenswrapper[4796]: I1205 10:37:34.985220 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa954fc-98a6-42f8-b3c5-629859212fab" containerName="util" Dec 05 10:37:34 crc kubenswrapper[4796]: E1205 10:37:34.985228 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aa954fc-98a6-42f8-b3c5-629859212fab" containerName="pull" Dec 05 10:37:34 crc kubenswrapper[4796]: I1205 10:37:34.985233 4796 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa954fc-98a6-42f8-b3c5-629859212fab" containerName="pull" Dec 05 10:37:34 crc kubenswrapper[4796]: I1205 10:37:34.985314 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e207003-e5a7-4cd4-a6e4-748fd8ece44b" containerName="console" Dec 05 10:37:34 crc kubenswrapper[4796]: I1205 10:37:34.985325 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aa954fc-98a6-42f8-b3c5-629859212fab" containerName="extract" Dec 05 10:37:34 crc kubenswrapper[4796]: I1205 10:37:34.985628 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5848764dff-zxvnb" Dec 05 10:37:34 crc kubenswrapper[4796]: I1205 10:37:34.986848 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-6wmtq" Dec 05 10:37:34 crc kubenswrapper[4796]: I1205 10:37:34.987098 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 05 10:37:34 crc kubenswrapper[4796]: I1205 10:37:34.988183 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 05 10:37:34 crc kubenswrapper[4796]: I1205 10:37:34.988476 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 05 10:37:34 crc kubenswrapper[4796]: I1205 10:37:34.988821 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 05 10:37:34 crc kubenswrapper[4796]: I1205 10:37:34.996372 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5848764dff-zxvnb"] Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.032874 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-45ktw\" (UniqueName: \"kubernetes.io/projected/0a2ba1fb-8eb8-497a-8f31-931aca49243e-kube-api-access-45ktw\") pod \"metallb-operator-controller-manager-5848764dff-zxvnb\" (UID: \"0a2ba1fb-8eb8-497a-8f31-931aca49243e\") " pod="metallb-system/metallb-operator-controller-manager-5848764dff-zxvnb" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.032966 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a2ba1fb-8eb8-497a-8f31-931aca49243e-apiservice-cert\") pod \"metallb-operator-controller-manager-5848764dff-zxvnb\" (UID: \"0a2ba1fb-8eb8-497a-8f31-931aca49243e\") " pod="metallb-system/metallb-operator-controller-manager-5848764dff-zxvnb" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.032995 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a2ba1fb-8eb8-497a-8f31-931aca49243e-webhook-cert\") pod \"metallb-operator-controller-manager-5848764dff-zxvnb\" (UID: \"0a2ba1fb-8eb8-497a-8f31-931aca49243e\") " pod="metallb-system/metallb-operator-controller-manager-5848764dff-zxvnb" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.133877 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a2ba1fb-8eb8-497a-8f31-931aca49243e-apiservice-cert\") pod \"metallb-operator-controller-manager-5848764dff-zxvnb\" (UID: \"0a2ba1fb-8eb8-497a-8f31-931aca49243e\") " pod="metallb-system/metallb-operator-controller-manager-5848764dff-zxvnb" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.133928 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a2ba1fb-8eb8-497a-8f31-931aca49243e-webhook-cert\") pod \"metallb-operator-controller-manager-5848764dff-zxvnb\" (UID: 
\"0a2ba1fb-8eb8-497a-8f31-931aca49243e\") " pod="metallb-system/metallb-operator-controller-manager-5848764dff-zxvnb" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.133961 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45ktw\" (UniqueName: \"kubernetes.io/projected/0a2ba1fb-8eb8-497a-8f31-931aca49243e-kube-api-access-45ktw\") pod \"metallb-operator-controller-manager-5848764dff-zxvnb\" (UID: \"0a2ba1fb-8eb8-497a-8f31-931aca49243e\") " pod="metallb-system/metallb-operator-controller-manager-5848764dff-zxvnb" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.140271 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a2ba1fb-8eb8-497a-8f31-931aca49243e-webhook-cert\") pod \"metallb-operator-controller-manager-5848764dff-zxvnb\" (UID: \"0a2ba1fb-8eb8-497a-8f31-931aca49243e\") " pod="metallb-system/metallb-operator-controller-manager-5848764dff-zxvnb" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.140279 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a2ba1fb-8eb8-497a-8f31-931aca49243e-apiservice-cert\") pod \"metallb-operator-controller-manager-5848764dff-zxvnb\" (UID: \"0a2ba1fb-8eb8-497a-8f31-931aca49243e\") " pod="metallb-system/metallb-operator-controller-manager-5848764dff-zxvnb" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.152797 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45ktw\" (UniqueName: \"kubernetes.io/projected/0a2ba1fb-8eb8-497a-8f31-931aca49243e-kube-api-access-45ktw\") pod \"metallb-operator-controller-manager-5848764dff-zxvnb\" (UID: \"0a2ba1fb-8eb8-497a-8f31-931aca49243e\") " pod="metallb-system/metallb-operator-controller-manager-5848764dff-zxvnb" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.296820 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5848764dff-zxvnb" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.305317 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-695488f64b-2qxth"] Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.305918 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-695488f64b-2qxth" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.306910 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-ngdgm" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.307295 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.307416 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.319117 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-695488f64b-2qxth"] Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.335784 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/249e98a9-0d33-4f5c-8102-03e46de4d20e-apiservice-cert\") pod \"metallb-operator-webhook-server-695488f64b-2qxth\" (UID: \"249e98a9-0d33-4f5c-8102-03e46de4d20e\") " pod="metallb-system/metallb-operator-webhook-server-695488f64b-2qxth" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.335845 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/249e98a9-0d33-4f5c-8102-03e46de4d20e-webhook-cert\") pod 
\"metallb-operator-webhook-server-695488f64b-2qxth\" (UID: \"249e98a9-0d33-4f5c-8102-03e46de4d20e\") " pod="metallb-system/metallb-operator-webhook-server-695488f64b-2qxth" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.335886 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4t8x\" (UniqueName: \"kubernetes.io/projected/249e98a9-0d33-4f5c-8102-03e46de4d20e-kube-api-access-s4t8x\") pod \"metallb-operator-webhook-server-695488f64b-2qxth\" (UID: \"249e98a9-0d33-4f5c-8102-03e46de4d20e\") " pod="metallb-system/metallb-operator-webhook-server-695488f64b-2qxth" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.436605 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/249e98a9-0d33-4f5c-8102-03e46de4d20e-apiservice-cert\") pod \"metallb-operator-webhook-server-695488f64b-2qxth\" (UID: \"249e98a9-0d33-4f5c-8102-03e46de4d20e\") " pod="metallb-system/metallb-operator-webhook-server-695488f64b-2qxth" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.436662 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/249e98a9-0d33-4f5c-8102-03e46de4d20e-webhook-cert\") pod \"metallb-operator-webhook-server-695488f64b-2qxth\" (UID: \"249e98a9-0d33-4f5c-8102-03e46de4d20e\") " pod="metallb-system/metallb-operator-webhook-server-695488f64b-2qxth" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.436700 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4t8x\" (UniqueName: \"kubernetes.io/projected/249e98a9-0d33-4f5c-8102-03e46de4d20e-kube-api-access-s4t8x\") pod \"metallb-operator-webhook-server-695488f64b-2qxth\" (UID: \"249e98a9-0d33-4f5c-8102-03e46de4d20e\") " pod="metallb-system/metallb-operator-webhook-server-695488f64b-2qxth" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 
10:37:35.439406 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/249e98a9-0d33-4f5c-8102-03e46de4d20e-apiservice-cert\") pod \"metallb-operator-webhook-server-695488f64b-2qxth\" (UID: \"249e98a9-0d33-4f5c-8102-03e46de4d20e\") " pod="metallb-system/metallb-operator-webhook-server-695488f64b-2qxth" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.439992 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/249e98a9-0d33-4f5c-8102-03e46de4d20e-webhook-cert\") pod \"metallb-operator-webhook-server-695488f64b-2qxth\" (UID: \"249e98a9-0d33-4f5c-8102-03e46de4d20e\") " pod="metallb-system/metallb-operator-webhook-server-695488f64b-2qxth" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.449803 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4t8x\" (UniqueName: \"kubernetes.io/projected/249e98a9-0d33-4f5c-8102-03e46de4d20e-kube-api-access-s4t8x\") pod \"metallb-operator-webhook-server-695488f64b-2qxth\" (UID: \"249e98a9-0d33-4f5c-8102-03e46de4d20e\") " pod="metallb-system/metallb-operator-webhook-server-695488f64b-2qxth" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.640401 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-695488f64b-2qxth" Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.664214 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5848764dff-zxvnb"] Dec 05 10:37:35 crc kubenswrapper[4796]: W1205 10:37:35.670308 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a2ba1fb_8eb8_497a_8f31_931aca49243e.slice/crio-33cfd084a93ef1dbc80836667925dec1e5d26dba453931955122b4c0f210aabb WatchSource:0}: Error finding container 33cfd084a93ef1dbc80836667925dec1e5d26dba453931955122b4c0f210aabb: Status 404 returned error can't find the container with id 33cfd084a93ef1dbc80836667925dec1e5d26dba453931955122b4c0f210aabb Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.781740 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-695488f64b-2qxth"] Dec 05 10:37:35 crc kubenswrapper[4796]: W1205 10:37:35.784375 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod249e98a9_0d33_4f5c_8102_03e46de4d20e.slice/crio-a5ab283dd87c76bd0c23e7a366697c9ded18f06390de299de6487901f5690595 WatchSource:0}: Error finding container a5ab283dd87c76bd0c23e7a366697c9ded18f06390de299de6487901f5690595: Status 404 returned error can't find the container with id a5ab283dd87c76bd0c23e7a366697c9ded18f06390de299de6487901f5690595 Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.951515 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-695488f64b-2qxth" event={"ID":"249e98a9-0d33-4f5c-8102-03e46de4d20e","Type":"ContainerStarted","Data":"a5ab283dd87c76bd0c23e7a366697c9ded18f06390de299de6487901f5690595"} Dec 05 10:37:35 crc kubenswrapper[4796]: I1205 10:37:35.952480 4796 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/metallb-operator-controller-manager-5848764dff-zxvnb" event={"ID":"0a2ba1fb-8eb8-497a-8f31-931aca49243e","Type":"ContainerStarted","Data":"33cfd084a93ef1dbc80836667925dec1e5d26dba453931955122b4c0f210aabb"} Dec 05 10:37:37 crc kubenswrapper[4796]: I1205 10:37:37.971255 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5848764dff-zxvnb" event={"ID":"0a2ba1fb-8eb8-497a-8f31-931aca49243e","Type":"ContainerStarted","Data":"7f26f283d2e5110cfa55399d7f49a7899a688b72df9992f51a8d23a315027be3"} Dec 05 10:37:37 crc kubenswrapper[4796]: I1205 10:37:37.971569 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5848764dff-zxvnb" Dec 05 10:37:37 crc kubenswrapper[4796]: I1205 10:37:37.986382 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5848764dff-zxvnb" podStartSLOduration=2.001381602 podStartE2EDuration="3.986229217s" podCreationTimestamp="2025-12-05 10:37:34 +0000 UTC" firstStartedPulling="2025-12-05 10:37:35.67936187 +0000 UTC m=+601.967467383" lastFinishedPulling="2025-12-05 10:37:37.664209485 +0000 UTC m=+603.952314998" observedRunningTime="2025-12-05 10:37:37.984996788 +0000 UTC m=+604.273102301" watchObservedRunningTime="2025-12-05 10:37:37.986229217 +0000 UTC m=+604.274334730" Dec 05 10:37:38 crc kubenswrapper[4796]: I1205 10:37:38.976625 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-695488f64b-2qxth" event={"ID":"249e98a9-0d33-4f5c-8102-03e46de4d20e","Type":"ContainerStarted","Data":"4aaa3a3e637514840ba2a03af193a981f608a3483004859b554b7e2697eadd9b"} Dec 05 10:37:38 crc kubenswrapper[4796]: I1205 10:37:38.976848 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-695488f64b-2qxth" Dec 05 10:37:38 
crc kubenswrapper[4796]: I1205 10:37:38.992911 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-695488f64b-2qxth" podStartSLOduration=0.985332843 podStartE2EDuration="3.992898729s" podCreationTimestamp="2025-12-05 10:37:35 +0000 UTC" firstStartedPulling="2025-12-05 10:37:35.787520387 +0000 UTC m=+602.075625900" lastFinishedPulling="2025-12-05 10:37:38.795086273 +0000 UTC m=+605.083191786" observedRunningTime="2025-12-05 10:37:38.991816413 +0000 UTC m=+605.279921947" watchObservedRunningTime="2025-12-05 10:37:38.992898729 +0000 UTC m=+605.281004242" Dec 05 10:37:55 crc kubenswrapper[4796]: I1205 10:37:55.644762 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-695488f64b-2qxth" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.298974 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5848764dff-zxvnb" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.801023 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9srrf"] Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.802935 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.803946 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-cvrqt"] Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.804215 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.804301 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.804658 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-cvrqt" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.804819 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-8frfm" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.806621 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.811835 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-cvrqt"] Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.860760 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-q4znh"] Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.861479 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-q4znh" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.863079 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.863168 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-mpzhq" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.863227 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.863412 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.870522 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-rnnld"] Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.871265 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-rnnld" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.874701 4796 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.876870 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-rnnld"] Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.953802 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-frr-sockets\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.953837 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ad3da70e-b17d-414e-a68f-197abce5d6fe-memberlist\") pod \"speaker-q4znh\" (UID: \"ad3da70e-b17d-414e-a68f-197abce5d6fe\") " pod="metallb-system/speaker-q4znh" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.953863 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad3da70e-b17d-414e-a68f-197abce5d6fe-metrics-certs\") pod \"speaker-q4znh\" (UID: \"ad3da70e-b17d-414e-a68f-197abce5d6fe\") " pod="metallb-system/speaker-q4znh" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.953878 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbwt7\" (UniqueName: \"kubernetes.io/projected/ad3da70e-b17d-414e-a68f-197abce5d6fe-kube-api-access-kbwt7\") pod \"speaker-q4znh\" (UID: \"ad3da70e-b17d-414e-a68f-197abce5d6fe\") " pod="metallb-system/speaker-q4znh" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.953900 
4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-reloader\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.953917 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/131fd9f0-c98b-45a8-9443-fb22ab2c6c28-metrics-certs\") pod \"controller-f8648f98b-rnnld\" (UID: \"131fd9f0-c98b-45a8-9443-fb22ab2c6c28\") " pod="metallb-system/controller-f8648f98b-rnnld" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.953944 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kf78\" (UniqueName: \"kubernetes.io/projected/e62fb2ad-7ed8-4e09-bab4-9a5ea8b08610-kube-api-access-9kf78\") pod \"frr-k8s-webhook-server-7fcb986d4-cvrqt\" (UID: \"e62fb2ad-7ed8-4e09-bab4-9a5ea8b08610\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-cvrqt" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.954038 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-frr-conf\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.954079 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnt9c\" (UniqueName: \"kubernetes.io/projected/131fd9f0-c98b-45a8-9443-fb22ab2c6c28-kube-api-access-xnt9c\") pod \"controller-f8648f98b-rnnld\" (UID: \"131fd9f0-c98b-45a8-9443-fb22ab2c6c28\") " pod="metallb-system/controller-f8648f98b-rnnld" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 
10:38:15.954117 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-frr-startup\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.954150 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-metrics\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.954167 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ad3da70e-b17d-414e-a68f-197abce5d6fe-metallb-excludel2\") pod \"speaker-q4znh\" (UID: \"ad3da70e-b17d-414e-a68f-197abce5d6fe\") " pod="metallb-system/speaker-q4znh" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.954216 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-metrics-certs\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.954249 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/131fd9f0-c98b-45a8-9443-fb22ab2c6c28-cert\") pod \"controller-f8648f98b-rnnld\" (UID: \"131fd9f0-c98b-45a8-9443-fb22ab2c6c28\") " pod="metallb-system/controller-f8648f98b-rnnld" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.954277 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e62fb2ad-7ed8-4e09-bab4-9a5ea8b08610-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-cvrqt\" (UID: \"e62fb2ad-7ed8-4e09-bab4-9a5ea8b08610\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-cvrqt" Dec 05 10:38:15 crc kubenswrapper[4796]: I1205 10:38:15.954313 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv95p\" (UniqueName: \"kubernetes.io/projected/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-kube-api-access-rv95p\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.055003 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad3da70e-b17d-414e-a68f-197abce5d6fe-metrics-certs\") pod \"speaker-q4znh\" (UID: \"ad3da70e-b17d-414e-a68f-197abce5d6fe\") " pod="metallb-system/speaker-q4znh" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.055047 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbwt7\" (UniqueName: \"kubernetes.io/projected/ad3da70e-b17d-414e-a68f-197abce5d6fe-kube-api-access-kbwt7\") pod \"speaker-q4znh\" (UID: \"ad3da70e-b17d-414e-a68f-197abce5d6fe\") " pod="metallb-system/speaker-q4znh" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.055071 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/131fd9f0-c98b-45a8-9443-fb22ab2c6c28-metrics-certs\") pod \"controller-f8648f98b-rnnld\" (UID: \"131fd9f0-c98b-45a8-9443-fb22ab2c6c28\") " pod="metallb-system/controller-f8648f98b-rnnld" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.055089 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-reloader\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.055107 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kf78\" (UniqueName: \"kubernetes.io/projected/e62fb2ad-7ed8-4e09-bab4-9a5ea8b08610-kube-api-access-9kf78\") pod \"frr-k8s-webhook-server-7fcb986d4-cvrqt\" (UID: \"e62fb2ad-7ed8-4e09-bab4-9a5ea8b08610\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-cvrqt" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.055123 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-frr-conf\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.055137 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnt9c\" (UniqueName: \"kubernetes.io/projected/131fd9f0-c98b-45a8-9443-fb22ab2c6c28-kube-api-access-xnt9c\") pod \"controller-f8648f98b-rnnld\" (UID: \"131fd9f0-c98b-45a8-9443-fb22ab2c6c28\") " pod="metallb-system/controller-f8648f98b-rnnld" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.055156 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-frr-startup\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.055173 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-metrics\") pod \"frr-k8s-9srrf\" (UID: 
\"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.055188 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ad3da70e-b17d-414e-a68f-197abce5d6fe-metallb-excludel2\") pod \"speaker-q4znh\" (UID: \"ad3da70e-b17d-414e-a68f-197abce5d6fe\") " pod="metallb-system/speaker-q4znh" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.055207 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-metrics-certs\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.055225 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/131fd9f0-c98b-45a8-9443-fb22ab2c6c28-cert\") pod \"controller-f8648f98b-rnnld\" (UID: \"131fd9f0-c98b-45a8-9443-fb22ab2c6c28\") " pod="metallb-system/controller-f8648f98b-rnnld" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.055243 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e62fb2ad-7ed8-4e09-bab4-9a5ea8b08610-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-cvrqt\" (UID: \"e62fb2ad-7ed8-4e09-bab4-9a5ea8b08610\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-cvrqt" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.055267 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv95p\" (UniqueName: \"kubernetes.io/projected/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-kube-api-access-rv95p\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 
10:38:16.055283 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-frr-sockets\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.055299 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ad3da70e-b17d-414e-a68f-197abce5d6fe-memberlist\") pod \"speaker-q4znh\" (UID: \"ad3da70e-b17d-414e-a68f-197abce5d6fe\") " pod="metallb-system/speaker-q4znh" Dec 05 10:38:16 crc kubenswrapper[4796]: E1205 10:38:16.055373 4796 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 05 10:38:16 crc kubenswrapper[4796]: E1205 10:38:16.055416 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad3da70e-b17d-414e-a68f-197abce5d6fe-memberlist podName:ad3da70e-b17d-414e-a68f-197abce5d6fe nodeName:}" failed. No retries permitted until 2025-12-05 10:38:16.555402731 +0000 UTC m=+642.843508244 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ad3da70e-b17d-414e-a68f-197abce5d6fe-memberlist") pod "speaker-q4znh" (UID: "ad3da70e-b17d-414e-a68f-197abce5d6fe") : secret "metallb-memberlist" not found Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.055522 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-reloader\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.055700 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-frr-conf\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.055772 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-metrics\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.055855 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-frr-sockets\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.056120 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-frr-startup\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:16 crc 
kubenswrapper[4796]: I1205 10:38:16.056265 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ad3da70e-b17d-414e-a68f-197abce5d6fe-metallb-excludel2\") pod \"speaker-q4znh\" (UID: \"ad3da70e-b17d-414e-a68f-197abce5d6fe\") " pod="metallb-system/speaker-q4znh" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.059563 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/131fd9f0-c98b-45a8-9443-fb22ab2c6c28-metrics-certs\") pod \"controller-f8648f98b-rnnld\" (UID: \"131fd9f0-c98b-45a8-9443-fb22ab2c6c28\") " pod="metallb-system/controller-f8648f98b-rnnld" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.060326 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e62fb2ad-7ed8-4e09-bab4-9a5ea8b08610-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-cvrqt\" (UID: \"e62fb2ad-7ed8-4e09-bab4-9a5ea8b08610\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-cvrqt" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.060986 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-metrics-certs\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.062999 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/131fd9f0-c98b-45a8-9443-fb22ab2c6c28-cert\") pod \"controller-f8648f98b-rnnld\" (UID: \"131fd9f0-c98b-45a8-9443-fb22ab2c6c28\") " pod="metallb-system/controller-f8648f98b-rnnld" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.065139 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/ad3da70e-b17d-414e-a68f-197abce5d6fe-metrics-certs\") pod \"speaker-q4znh\" (UID: \"ad3da70e-b17d-414e-a68f-197abce5d6fe\") " pod="metallb-system/speaker-q4znh" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.071959 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbwt7\" (UniqueName: \"kubernetes.io/projected/ad3da70e-b17d-414e-a68f-197abce5d6fe-kube-api-access-kbwt7\") pod \"speaker-q4znh\" (UID: \"ad3da70e-b17d-414e-a68f-197abce5d6fe\") " pod="metallb-system/speaker-q4znh" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.072202 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnt9c\" (UniqueName: \"kubernetes.io/projected/131fd9f0-c98b-45a8-9443-fb22ab2c6c28-kube-api-access-xnt9c\") pod \"controller-f8648f98b-rnnld\" (UID: \"131fd9f0-c98b-45a8-9443-fb22ab2c6c28\") " pod="metallb-system/controller-f8648f98b-rnnld" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.072578 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv95p\" (UniqueName: \"kubernetes.io/projected/24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63-kube-api-access-rv95p\") pod \"frr-k8s-9srrf\" (UID: \"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63\") " pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.072586 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kf78\" (UniqueName: \"kubernetes.io/projected/e62fb2ad-7ed8-4e09-bab4-9a5ea8b08610-kube-api-access-9kf78\") pod \"frr-k8s-webhook-server-7fcb986d4-cvrqt\" (UID: \"e62fb2ad-7ed8-4e09-bab4-9a5ea8b08610\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-cvrqt" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.119933 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.124272 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-cvrqt" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.181487 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-rnnld" Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.476860 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-cvrqt"] Dec 05 10:38:16 crc kubenswrapper[4796]: W1205 10:38:16.480156 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode62fb2ad_7ed8_4e09_bab4_9a5ea8b08610.slice/crio-a67265433329193e2436cf4caa8799f0008137691d48cd0ff57c81660542ccdf WatchSource:0}: Error finding container a67265433329193e2436cf4caa8799f0008137691d48cd0ff57c81660542ccdf: Status 404 returned error can't find the container with id a67265433329193e2436cf4caa8799f0008137691d48cd0ff57c81660542ccdf Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.523498 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-rnnld"] Dec 05 10:38:16 crc kubenswrapper[4796]: W1205 10:38:16.525606 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod131fd9f0_c98b_45a8_9443_fb22ab2c6c28.slice/crio-a5b55636c56252f746bfc96e51f2083069bec555e042344bcc025dfd47382da1 WatchSource:0}: Error finding container a5b55636c56252f746bfc96e51f2083069bec555e042344bcc025dfd47382da1: Status 404 returned error can't find the container with id a5b55636c56252f746bfc96e51f2083069bec555e042344bcc025dfd47382da1 Dec 05 10:38:16 crc kubenswrapper[4796]: I1205 10:38:16.564970 4796 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ad3da70e-b17d-414e-a68f-197abce5d6fe-memberlist\") pod \"speaker-q4znh\" (UID: \"ad3da70e-b17d-414e-a68f-197abce5d6fe\") " pod="metallb-system/speaker-q4znh" Dec 05 10:38:16 crc kubenswrapper[4796]: E1205 10:38:16.565146 4796 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 05 10:38:16 crc kubenswrapper[4796]: E1205 10:38:16.565208 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad3da70e-b17d-414e-a68f-197abce5d6fe-memberlist podName:ad3da70e-b17d-414e-a68f-197abce5d6fe nodeName:}" failed. No retries permitted until 2025-12-05 10:38:17.565194416 +0000 UTC m=+643.853299930 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ad3da70e-b17d-414e-a68f-197abce5d6fe-memberlist") pod "speaker-q4znh" (UID: "ad3da70e-b17d-414e-a68f-197abce5d6fe") : secret "metallb-memberlist" not found Dec 05 10:38:17 crc kubenswrapper[4796]: I1205 10:38:17.133370 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9srrf" event={"ID":"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63","Type":"ContainerStarted","Data":"fec4e9f98e9b93edd80991672dd1ca02d9d39001be9cae716e3f488a3f44ac5b"} Dec 05 10:38:17 crc kubenswrapper[4796]: I1205 10:38:17.135018 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-rnnld" event={"ID":"131fd9f0-c98b-45a8-9443-fb22ab2c6c28","Type":"ContainerStarted","Data":"2b51214d8c3f5e1c0a51e92c6d8e35d0829ef6fd95e4510e34cb6625f324190f"} Dec 05 10:38:17 crc kubenswrapper[4796]: I1205 10:38:17.135045 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-rnnld" event={"ID":"131fd9f0-c98b-45a8-9443-fb22ab2c6c28","Type":"ContainerStarted","Data":"fc46de76d892e3f17932a1a616e75223104b0b8ed206933ec13ee9fae7841795"} Dec 05 10:38:17 crc 
kubenswrapper[4796]: I1205 10:38:17.135055 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-rnnld" event={"ID":"131fd9f0-c98b-45a8-9443-fb22ab2c6c28","Type":"ContainerStarted","Data":"a5b55636c56252f746bfc96e51f2083069bec555e042344bcc025dfd47382da1"} Dec 05 10:38:17 crc kubenswrapper[4796]: I1205 10:38:17.135082 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-rnnld" Dec 05 10:38:17 crc kubenswrapper[4796]: I1205 10:38:17.135890 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-cvrqt" event={"ID":"e62fb2ad-7ed8-4e09-bab4-9a5ea8b08610","Type":"ContainerStarted","Data":"a67265433329193e2436cf4caa8799f0008137691d48cd0ff57c81660542ccdf"} Dec 05 10:38:17 crc kubenswrapper[4796]: I1205 10:38:17.155168 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-rnnld" podStartSLOduration=2.15515298 podStartE2EDuration="2.15515298s" podCreationTimestamp="2025-12-05 10:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:38:17.151843974 +0000 UTC m=+643.439949487" watchObservedRunningTime="2025-12-05 10:38:17.15515298 +0000 UTC m=+643.443258494" Dec 05 10:38:17 crc kubenswrapper[4796]: I1205 10:38:17.574827 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ad3da70e-b17d-414e-a68f-197abce5d6fe-memberlist\") pod \"speaker-q4znh\" (UID: \"ad3da70e-b17d-414e-a68f-197abce5d6fe\") " pod="metallb-system/speaker-q4znh" Dec 05 10:38:17 crc kubenswrapper[4796]: I1205 10:38:17.579183 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ad3da70e-b17d-414e-a68f-197abce5d6fe-memberlist\") pod \"speaker-q4znh\" (UID: 
\"ad3da70e-b17d-414e-a68f-197abce5d6fe\") " pod="metallb-system/speaker-q4znh" Dec 05 10:38:17 crc kubenswrapper[4796]: I1205 10:38:17.670835 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-q4znh" Dec 05 10:38:18 crc kubenswrapper[4796]: I1205 10:38:18.142346 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-q4znh" event={"ID":"ad3da70e-b17d-414e-a68f-197abce5d6fe","Type":"ContainerStarted","Data":"4f53710e3f6d9dd64fc49928c17c73be932da7d263ae55dc751750e5f10d3d0f"} Dec 05 10:38:18 crc kubenswrapper[4796]: I1205 10:38:18.142390 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-q4znh" event={"ID":"ad3da70e-b17d-414e-a68f-197abce5d6fe","Type":"ContainerStarted","Data":"5319acb5b3ac4bd6aac770bb034b07a7fc7f29f74311a0ced83f0c21a4e03e75"} Dec 05 10:38:18 crc kubenswrapper[4796]: I1205 10:38:18.142402 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-q4znh" event={"ID":"ad3da70e-b17d-414e-a68f-197abce5d6fe","Type":"ContainerStarted","Data":"e7401f953df537dee486dc80f5dff4dad10a8f1e7083fe8fccba4beecba65564"} Dec 05 10:38:18 crc kubenswrapper[4796]: I1205 10:38:18.142546 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-q4znh" Dec 05 10:38:18 crc kubenswrapper[4796]: I1205 10:38:18.155996 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-q4znh" podStartSLOduration=3.155983506 podStartE2EDuration="3.155983506s" podCreationTimestamp="2025-12-05 10:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:38:18.155075547 +0000 UTC m=+644.443181061" watchObservedRunningTime="2025-12-05 10:38:18.155983506 +0000 UTC m=+644.444089019" Dec 05 10:38:23 crc kubenswrapper[4796]: I1205 10:38:23.168449 4796 generic.go:334] "Generic (PLEG): 
container finished" podID="24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63" containerID="f6e2988fffa685956900e26ff657c1ce7e1360ab41e5eee66932652016e0d697" exitCode=0 Dec 05 10:38:23 crc kubenswrapper[4796]: I1205 10:38:23.168498 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9srrf" event={"ID":"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63","Type":"ContainerDied","Data":"f6e2988fffa685956900e26ff657c1ce7e1360ab41e5eee66932652016e0d697"} Dec 05 10:38:23 crc kubenswrapper[4796]: I1205 10:38:23.170019 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-cvrqt" event={"ID":"e62fb2ad-7ed8-4e09-bab4-9a5ea8b08610","Type":"ContainerStarted","Data":"1cb51dc2d864472cd619d106ac16806c6bd32f45143be9120ad566bf59890a87"} Dec 05 10:38:23 crc kubenswrapper[4796]: I1205 10:38:23.170144 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-cvrqt" Dec 05 10:38:23 crc kubenswrapper[4796]: I1205 10:38:23.198109 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-cvrqt" podStartSLOduration=2.362626089 podStartE2EDuration="8.198094957s" podCreationTimestamp="2025-12-05 10:38:15 +0000 UTC" firstStartedPulling="2025-12-05 10:38:16.482128795 +0000 UTC m=+642.770234308" lastFinishedPulling="2025-12-05 10:38:22.317597663 +0000 UTC m=+648.605703176" observedRunningTime="2025-12-05 10:38:23.196023809 +0000 UTC m=+649.484129322" watchObservedRunningTime="2025-12-05 10:38:23.198094957 +0000 UTC m=+649.486200469" Dec 05 10:38:24 crc kubenswrapper[4796]: I1205 10:38:24.176354 4796 generic.go:334] "Generic (PLEG): container finished" podID="24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63" containerID="045be08274d78422036babfe9b1b087055c964f10e24e432c02bc6d39ae9b27b" exitCode=0 Dec 05 10:38:24 crc kubenswrapper[4796]: I1205 10:38:24.176400 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-9srrf" event={"ID":"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63","Type":"ContainerDied","Data":"045be08274d78422036babfe9b1b087055c964f10e24e432c02bc6d39ae9b27b"} Dec 05 10:38:25 crc kubenswrapper[4796]: I1205 10:38:25.181796 4796 generic.go:334] "Generic (PLEG): container finished" podID="24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63" containerID="1a2817290ac793ac10e7f1f94557eed363f424d10ec402a2c8bd9113967bbb4e" exitCode=0 Dec 05 10:38:25 crc kubenswrapper[4796]: I1205 10:38:25.181859 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9srrf" event={"ID":"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63","Type":"ContainerDied","Data":"1a2817290ac793ac10e7f1f94557eed363f424d10ec402a2c8bd9113967bbb4e"} Dec 05 10:38:26 crc kubenswrapper[4796]: I1205 10:38:26.184976 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-rnnld" Dec 05 10:38:26 crc kubenswrapper[4796]: I1205 10:38:26.189639 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9srrf" event={"ID":"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63","Type":"ContainerStarted","Data":"628024d39d7bc71a55e321bc39cfdaa41347b277ded8b4edd6f46973ea3683f9"} Dec 05 10:38:26 crc kubenswrapper[4796]: I1205 10:38:26.189674 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9srrf" event={"ID":"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63","Type":"ContainerStarted","Data":"89af6d2a0ac7bc02326720e12efc12521b16349aed5235ad203b24498859579c"} Dec 05 10:38:26 crc kubenswrapper[4796]: I1205 10:38:26.189707 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9srrf" event={"ID":"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63","Type":"ContainerStarted","Data":"93a23e10f2d3b3afbb0cf43c89b9738aec806b44bb23854273e1d05819792590"} Dec 05 10:38:26 crc kubenswrapper[4796]: I1205 10:38:26.189717 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9srrf" 
event={"ID":"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63","Type":"ContainerStarted","Data":"918bc2e5256c4f6c938ffd314824816a74569233078d4475c01bd66c320e3803"} Dec 05 10:38:26 crc kubenswrapper[4796]: I1205 10:38:26.189724 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9srrf" event={"ID":"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63","Type":"ContainerStarted","Data":"36a98712921d049938fc146b28d6a61f356d6f2b3534e78536c63ee2cfe21e5e"} Dec 05 10:38:26 crc kubenswrapper[4796]: I1205 10:38:26.189732 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9srrf" event={"ID":"24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63","Type":"ContainerStarted","Data":"16e9d84fd521aa7c41a2a31e39ce47a90cd34b83dba03203400994cc6d093d59"} Dec 05 10:38:26 crc kubenswrapper[4796]: I1205 10:38:26.189788 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:26 crc kubenswrapper[4796]: I1205 10:38:26.213582 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9srrf" podStartSLOduration=5.139114495 podStartE2EDuration="11.213564114s" podCreationTimestamp="2025-12-05 10:38:15 +0000 UTC" firstStartedPulling="2025-12-05 10:38:16.240731797 +0000 UTC m=+642.528837311" lastFinishedPulling="2025-12-05 10:38:22.315181416 +0000 UTC m=+648.603286930" observedRunningTime="2025-12-05 10:38:26.210416923 +0000 UTC m=+652.498522436" watchObservedRunningTime="2025-12-05 10:38:26.213564114 +0000 UTC m=+652.501669626" Dec 05 10:38:27 crc kubenswrapper[4796]: I1205 10:38:27.673699 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-q4znh" Dec 05 10:38:29 crc kubenswrapper[4796]: I1205 10:38:29.789560 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cxr7s"] Dec 05 10:38:29 crc kubenswrapper[4796]: I1205 10:38:29.790443 4796 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cxr7s" Dec 05 10:38:29 crc kubenswrapper[4796]: I1205 10:38:29.791987 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 05 10:38:29 crc kubenswrapper[4796]: I1205 10:38:29.795702 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cxr7s"] Dec 05 10:38:29 crc kubenswrapper[4796]: I1205 10:38:29.797329 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-xpl4w" Dec 05 10:38:29 crc kubenswrapper[4796]: I1205 10:38:29.797340 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 05 10:38:29 crc kubenswrapper[4796]: I1205 10:38:29.917919 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqftc\" (UniqueName: \"kubernetes.io/projected/e5e3bf52-da34-435e-9663-13bd52ac6997-kube-api-access-bqftc\") pod \"openstack-operator-index-cxr7s\" (UID: \"e5e3bf52-da34-435e-9663-13bd52ac6997\") " pod="openstack-operators/openstack-operator-index-cxr7s" Dec 05 10:38:30 crc kubenswrapper[4796]: I1205 10:38:30.018784 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqftc\" (UniqueName: \"kubernetes.io/projected/e5e3bf52-da34-435e-9663-13bd52ac6997-kube-api-access-bqftc\") pod \"openstack-operator-index-cxr7s\" (UID: \"e5e3bf52-da34-435e-9663-13bd52ac6997\") " pod="openstack-operators/openstack-operator-index-cxr7s" Dec 05 10:38:30 crc kubenswrapper[4796]: I1205 10:38:30.033281 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqftc\" (UniqueName: \"kubernetes.io/projected/e5e3bf52-da34-435e-9663-13bd52ac6997-kube-api-access-bqftc\") pod \"openstack-operator-index-cxr7s\" (UID: 
\"e5e3bf52-da34-435e-9663-13bd52ac6997\") " pod="openstack-operators/openstack-operator-index-cxr7s" Dec 05 10:38:30 crc kubenswrapper[4796]: I1205 10:38:30.104930 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cxr7s" Dec 05 10:38:30 crc kubenswrapper[4796]: I1205 10:38:30.450058 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cxr7s"] Dec 05 10:38:30 crc kubenswrapper[4796]: W1205 10:38:30.453819 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5e3bf52_da34_435e_9663_13bd52ac6997.slice/crio-2f2f28d89435b84ff80fae6b7d82f7b216afeb7661c1cf31b656bc99ddc882ed WatchSource:0}: Error finding container 2f2f28d89435b84ff80fae6b7d82f7b216afeb7661c1cf31b656bc99ddc882ed: Status 404 returned error can't find the container with id 2f2f28d89435b84ff80fae6b7d82f7b216afeb7661c1cf31b656bc99ddc882ed Dec 05 10:38:31 crc kubenswrapper[4796]: I1205 10:38:31.120790 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:31 crc kubenswrapper[4796]: I1205 10:38:31.148705 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:31 crc kubenswrapper[4796]: I1205 10:38:31.215194 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cxr7s" event={"ID":"e5e3bf52-da34-435e-9663-13bd52ac6997","Type":"ContainerStarted","Data":"2f2f28d89435b84ff80fae6b7d82f7b216afeb7661c1cf31b656bc99ddc882ed"} Dec 05 10:38:32 crc kubenswrapper[4796]: I1205 10:38:32.224443 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cxr7s" 
event={"ID":"e5e3bf52-da34-435e-9663-13bd52ac6997","Type":"ContainerStarted","Data":"7105d7d392da739d7286d55c87df23adb080afded1f4517a928baa6b9c260126"} Dec 05 10:38:32 crc kubenswrapper[4796]: I1205 10:38:32.244418 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cxr7s" podStartSLOduration=2.403596716 podStartE2EDuration="3.244404225s" podCreationTimestamp="2025-12-05 10:38:29 +0000 UTC" firstStartedPulling="2025-12-05 10:38:30.455502247 +0000 UTC m=+656.743607759" lastFinishedPulling="2025-12-05 10:38:31.296309756 +0000 UTC m=+657.584415268" observedRunningTime="2025-12-05 10:38:32.243559266 +0000 UTC m=+658.531664779" watchObservedRunningTime="2025-12-05 10:38:32.244404225 +0000 UTC m=+658.532509739" Dec 05 10:38:33 crc kubenswrapper[4796]: I1205 10:38:33.174127 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-cxr7s"] Dec 05 10:38:33 crc kubenswrapper[4796]: I1205 10:38:33.779483 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gk94f"] Dec 05 10:38:33 crc kubenswrapper[4796]: I1205 10:38:33.780131 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gk94f" Dec 05 10:38:33 crc kubenswrapper[4796]: I1205 10:38:33.786967 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gk94f"] Dec 05 10:38:33 crc kubenswrapper[4796]: I1205 10:38:33.868109 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nrxd\" (UniqueName: \"kubernetes.io/projected/cc045b46-7b1a-4a74-9bcc-9fdf067dbf3d-kube-api-access-8nrxd\") pod \"openstack-operator-index-gk94f\" (UID: \"cc045b46-7b1a-4a74-9bcc-9fdf067dbf3d\") " pod="openstack-operators/openstack-operator-index-gk94f" Dec 05 10:38:33 crc kubenswrapper[4796]: I1205 10:38:33.969485 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nrxd\" (UniqueName: \"kubernetes.io/projected/cc045b46-7b1a-4a74-9bcc-9fdf067dbf3d-kube-api-access-8nrxd\") pod \"openstack-operator-index-gk94f\" (UID: \"cc045b46-7b1a-4a74-9bcc-9fdf067dbf3d\") " pod="openstack-operators/openstack-operator-index-gk94f" Dec 05 10:38:33 crc kubenswrapper[4796]: I1205 10:38:33.984735 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nrxd\" (UniqueName: \"kubernetes.io/projected/cc045b46-7b1a-4a74-9bcc-9fdf067dbf3d-kube-api-access-8nrxd\") pod \"openstack-operator-index-gk94f\" (UID: \"cc045b46-7b1a-4a74-9bcc-9fdf067dbf3d\") " pod="openstack-operators/openstack-operator-index-gk94f" Dec 05 10:38:34 crc kubenswrapper[4796]: I1205 10:38:34.093182 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gk94f" Dec 05 10:38:34 crc kubenswrapper[4796]: I1205 10:38:34.232844 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-cxr7s" podUID="e5e3bf52-da34-435e-9663-13bd52ac6997" containerName="registry-server" containerID="cri-o://7105d7d392da739d7286d55c87df23adb080afded1f4517a928baa6b9c260126" gracePeriod=2 Dec 05 10:38:34 crc kubenswrapper[4796]: I1205 10:38:34.455245 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gk94f"] Dec 05 10:38:34 crc kubenswrapper[4796]: W1205 10:38:34.457611 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc045b46_7b1a_4a74_9bcc_9fdf067dbf3d.slice/crio-435324aac2677dd8815bdd53b0a14ad9ad6ad36974dbb78d5acb2f7fe1233021 WatchSource:0}: Error finding container 435324aac2677dd8815bdd53b0a14ad9ad6ad36974dbb78d5acb2f7fe1233021: Status 404 returned error can't find the container with id 435324aac2677dd8815bdd53b0a14ad9ad6ad36974dbb78d5acb2f7fe1233021 Dec 05 10:38:34 crc kubenswrapper[4796]: I1205 10:38:34.522134 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cxr7s" Dec 05 10:38:34 crc kubenswrapper[4796]: I1205 10:38:34.678146 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqftc\" (UniqueName: \"kubernetes.io/projected/e5e3bf52-da34-435e-9663-13bd52ac6997-kube-api-access-bqftc\") pod \"e5e3bf52-da34-435e-9663-13bd52ac6997\" (UID: \"e5e3bf52-da34-435e-9663-13bd52ac6997\") " Dec 05 10:38:34 crc kubenswrapper[4796]: I1205 10:38:34.683968 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5e3bf52-da34-435e-9663-13bd52ac6997-kube-api-access-bqftc" (OuterVolumeSpecName: "kube-api-access-bqftc") pod "e5e3bf52-da34-435e-9663-13bd52ac6997" (UID: "e5e3bf52-da34-435e-9663-13bd52ac6997"). InnerVolumeSpecName "kube-api-access-bqftc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:38:34 crc kubenswrapper[4796]: I1205 10:38:34.780162 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqftc\" (UniqueName: \"kubernetes.io/projected/e5e3bf52-da34-435e-9663-13bd52ac6997-kube-api-access-bqftc\") on node \"crc\" DevicePath \"\"" Dec 05 10:38:35 crc kubenswrapper[4796]: I1205 10:38:35.240316 4796 generic.go:334] "Generic (PLEG): container finished" podID="e5e3bf52-da34-435e-9663-13bd52ac6997" containerID="7105d7d392da739d7286d55c87df23adb080afded1f4517a928baa6b9c260126" exitCode=0 Dec 05 10:38:35 crc kubenswrapper[4796]: I1205 10:38:35.240382 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cxr7s" Dec 05 10:38:35 crc kubenswrapper[4796]: I1205 10:38:35.240404 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cxr7s" event={"ID":"e5e3bf52-da34-435e-9663-13bd52ac6997","Type":"ContainerDied","Data":"7105d7d392da739d7286d55c87df23adb080afded1f4517a928baa6b9c260126"} Dec 05 10:38:35 crc kubenswrapper[4796]: I1205 10:38:35.241066 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cxr7s" event={"ID":"e5e3bf52-da34-435e-9663-13bd52ac6997","Type":"ContainerDied","Data":"2f2f28d89435b84ff80fae6b7d82f7b216afeb7661c1cf31b656bc99ddc882ed"} Dec 05 10:38:35 crc kubenswrapper[4796]: I1205 10:38:35.241137 4796 scope.go:117] "RemoveContainer" containerID="7105d7d392da739d7286d55c87df23adb080afded1f4517a928baa6b9c260126" Dec 05 10:38:35 crc kubenswrapper[4796]: I1205 10:38:35.243058 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gk94f" event={"ID":"cc045b46-7b1a-4a74-9bcc-9fdf067dbf3d","Type":"ContainerStarted","Data":"b89f834a096955927026e9dadc84a2acd0fb9d8c05a83b896c65bc5051a26d07"} Dec 05 10:38:35 crc kubenswrapper[4796]: I1205 10:38:35.243144 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gk94f" event={"ID":"cc045b46-7b1a-4a74-9bcc-9fdf067dbf3d","Type":"ContainerStarted","Data":"435324aac2677dd8815bdd53b0a14ad9ad6ad36974dbb78d5acb2f7fe1233021"} Dec 05 10:38:35 crc kubenswrapper[4796]: I1205 10:38:35.253147 4796 scope.go:117] "RemoveContainer" containerID="7105d7d392da739d7286d55c87df23adb080afded1f4517a928baa6b9c260126" Dec 05 10:38:35 crc kubenswrapper[4796]: E1205 10:38:35.253584 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7105d7d392da739d7286d55c87df23adb080afded1f4517a928baa6b9c260126\": container with ID starting with 7105d7d392da739d7286d55c87df23adb080afded1f4517a928baa6b9c260126 not found: ID does not exist" containerID="7105d7d392da739d7286d55c87df23adb080afded1f4517a928baa6b9c260126" Dec 05 10:38:35 crc kubenswrapper[4796]: I1205 10:38:35.253622 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7105d7d392da739d7286d55c87df23adb080afded1f4517a928baa6b9c260126"} err="failed to get container status \"7105d7d392da739d7286d55c87df23adb080afded1f4517a928baa6b9c260126\": rpc error: code = NotFound desc = could not find container \"7105d7d392da739d7286d55c87df23adb080afded1f4517a928baa6b9c260126\": container with ID starting with 7105d7d392da739d7286d55c87df23adb080afded1f4517a928baa6b9c260126 not found: ID does not exist" Dec 05 10:38:35 crc kubenswrapper[4796]: I1205 10:38:35.255929 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-gk94f" podStartSLOduration=1.752802153 podStartE2EDuration="2.255913212s" podCreationTimestamp="2025-12-05 10:38:33 +0000 UTC" firstStartedPulling="2025-12-05 10:38:34.459711049 +0000 UTC m=+660.747816562" lastFinishedPulling="2025-12-05 10:38:34.962822118 +0000 UTC m=+661.250927621" observedRunningTime="2025-12-05 10:38:35.255060117 +0000 UTC m=+661.543165630" watchObservedRunningTime="2025-12-05 10:38:35.255913212 +0000 UTC m=+661.544018725" Dec 05 10:38:35 crc kubenswrapper[4796]: I1205 10:38:35.265740 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-cxr7s"] Dec 05 10:38:35 crc kubenswrapper[4796]: I1205 10:38:35.267402 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-cxr7s"] Dec 05 10:38:36 crc kubenswrapper[4796]: I1205 10:38:36.036717 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e5e3bf52-da34-435e-9663-13bd52ac6997" path="/var/lib/kubelet/pods/e5e3bf52-da34-435e-9663-13bd52ac6997/volumes" Dec 05 10:38:36 crc kubenswrapper[4796]: I1205 10:38:36.123422 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9srrf" Dec 05 10:38:36 crc kubenswrapper[4796]: I1205 10:38:36.128647 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-cvrqt" Dec 05 10:38:44 crc kubenswrapper[4796]: I1205 10:38:44.093413 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-gk94f" Dec 05 10:38:44 crc kubenswrapper[4796]: I1205 10:38:44.094367 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-gk94f" Dec 05 10:38:44 crc kubenswrapper[4796]: I1205 10:38:44.115933 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-gk94f" Dec 05 10:38:44 crc kubenswrapper[4796]: I1205 10:38:44.304855 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-gk94f" Dec 05 10:38:45 crc kubenswrapper[4796]: I1205 10:38:45.402852 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb"] Dec 05 10:38:45 crc kubenswrapper[4796]: E1205 10:38:45.403079 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5e3bf52-da34-435e-9663-13bd52ac6997" containerName="registry-server" Dec 05 10:38:45 crc kubenswrapper[4796]: I1205 10:38:45.403091 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5e3bf52-da34-435e-9663-13bd52ac6997" containerName="registry-server" Dec 05 10:38:45 crc kubenswrapper[4796]: I1205 10:38:45.403188 4796 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e5e3bf52-da34-435e-9663-13bd52ac6997" containerName="registry-server" Dec 05 10:38:45 crc kubenswrapper[4796]: I1205 10:38:45.403910 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb" Dec 05 10:38:45 crc kubenswrapper[4796]: I1205 10:38:45.405241 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-zbgjl" Dec 05 10:38:45 crc kubenswrapper[4796]: I1205 10:38:45.409389 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb"] Dec 05 10:38:45 crc kubenswrapper[4796]: I1205 10:38:45.505364 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfxgt\" (UniqueName: \"kubernetes.io/projected/ce917c34-c2b6-4c47-ac86-8f9bd3e903a2-kube-api-access-pfxgt\") pod \"1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb\" (UID: \"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2\") " pod="openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb" Dec 05 10:38:45 crc kubenswrapper[4796]: I1205 10:38:45.505677 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce917c34-c2b6-4c47-ac86-8f9bd3e903a2-util\") pod \"1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb\" (UID: \"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2\") " pod="openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb" Dec 05 10:38:45 crc kubenswrapper[4796]: I1205 10:38:45.505727 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce917c34-c2b6-4c47-ac86-8f9bd3e903a2-bundle\") pod \"1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb\" 
(UID: \"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2\") " pod="openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb" Dec 05 10:38:45 crc kubenswrapper[4796]: I1205 10:38:45.606622 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfxgt\" (UniqueName: \"kubernetes.io/projected/ce917c34-c2b6-4c47-ac86-8f9bd3e903a2-kube-api-access-pfxgt\") pod \"1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb\" (UID: \"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2\") " pod="openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb" Dec 05 10:38:45 crc kubenswrapper[4796]: I1205 10:38:45.606658 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce917c34-c2b6-4c47-ac86-8f9bd3e903a2-util\") pod \"1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb\" (UID: \"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2\") " pod="openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb" Dec 05 10:38:45 crc kubenswrapper[4796]: I1205 10:38:45.606697 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce917c34-c2b6-4c47-ac86-8f9bd3e903a2-bundle\") pod \"1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb\" (UID: \"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2\") " pod="openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb" Dec 05 10:38:45 crc kubenswrapper[4796]: I1205 10:38:45.607348 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce917c34-c2b6-4c47-ac86-8f9bd3e903a2-bundle\") pod \"1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb\" (UID: \"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2\") " pod="openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb" Dec 05 10:38:45 crc 
kubenswrapper[4796]: I1205 10:38:45.607391 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce917c34-c2b6-4c47-ac86-8f9bd3e903a2-util\") pod \"1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb\" (UID: \"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2\") " pod="openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb" Dec 05 10:38:45 crc kubenswrapper[4796]: I1205 10:38:45.621606 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfxgt\" (UniqueName: \"kubernetes.io/projected/ce917c34-c2b6-4c47-ac86-8f9bd3e903a2-kube-api-access-pfxgt\") pod \"1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb\" (UID: \"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2\") " pod="openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb" Dec 05 10:38:45 crc kubenswrapper[4796]: I1205 10:38:45.721801 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb" Dec 05 10:38:46 crc kubenswrapper[4796]: I1205 10:38:46.081150 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb"] Dec 05 10:38:46 crc kubenswrapper[4796]: W1205 10:38:46.084284 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce917c34_c2b6_4c47_ac86_8f9bd3e903a2.slice/crio-e642cc1d963c13dd90312ccb2628352d6d0cc70dda30f541008f4f6b49e73591 WatchSource:0}: Error finding container e642cc1d963c13dd90312ccb2628352d6d0cc70dda30f541008f4f6b49e73591: Status 404 returned error can't find the container with id e642cc1d963c13dd90312ccb2628352d6d0cc70dda30f541008f4f6b49e73591 Dec 05 10:38:46 crc kubenswrapper[4796]: I1205 10:38:46.292410 4796 generic.go:334] "Generic (PLEG): container finished" podID="ce917c34-c2b6-4c47-ac86-8f9bd3e903a2" containerID="40ef79ef8c5c70f47298aa4e4ffcff3bb9c746ac6a536749fe471461ffc1f39b" exitCode=0 Dec 05 10:38:46 crc kubenswrapper[4796]: I1205 10:38:46.292505 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb" event={"ID":"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2","Type":"ContainerDied","Data":"40ef79ef8c5c70f47298aa4e4ffcff3bb9c746ac6a536749fe471461ffc1f39b"} Dec 05 10:38:46 crc kubenswrapper[4796]: I1205 10:38:46.292531 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb" event={"ID":"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2","Type":"ContainerStarted","Data":"e642cc1d963c13dd90312ccb2628352d6d0cc70dda30f541008f4f6b49e73591"} Dec 05 10:38:47 crc kubenswrapper[4796]: I1205 10:38:47.298236 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="ce917c34-c2b6-4c47-ac86-8f9bd3e903a2" containerID="f5271a268b3ce9ecd459d8585434e42a9080adb5adbc57306e7110f42339f751" exitCode=0 Dec 05 10:38:47 crc kubenswrapper[4796]: I1205 10:38:47.298288 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb" event={"ID":"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2","Type":"ContainerDied","Data":"f5271a268b3ce9ecd459d8585434e42a9080adb5adbc57306e7110f42339f751"} Dec 05 10:38:48 crc kubenswrapper[4796]: I1205 10:38:48.305185 4796 generic.go:334] "Generic (PLEG): container finished" podID="ce917c34-c2b6-4c47-ac86-8f9bd3e903a2" containerID="11966a00dcf3d08fb80c58e132a02b08e011d8093f2ca014677dc57f9fe3d92a" exitCode=0 Dec 05 10:38:48 crc kubenswrapper[4796]: I1205 10:38:48.305224 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb" event={"ID":"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2","Type":"ContainerDied","Data":"11966a00dcf3d08fb80c58e132a02b08e011d8093f2ca014677dc57f9fe3d92a"} Dec 05 10:38:49 crc kubenswrapper[4796]: I1205 10:38:49.486046 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb" Dec 05 10:38:49 crc kubenswrapper[4796]: I1205 10:38:49.550119 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfxgt\" (UniqueName: \"kubernetes.io/projected/ce917c34-c2b6-4c47-ac86-8f9bd3e903a2-kube-api-access-pfxgt\") pod \"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2\" (UID: \"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2\") " Dec 05 10:38:49 crc kubenswrapper[4796]: I1205 10:38:49.550173 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce917c34-c2b6-4c47-ac86-8f9bd3e903a2-util\") pod \"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2\" (UID: \"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2\") " Dec 05 10:38:49 crc kubenswrapper[4796]: I1205 10:38:49.550203 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce917c34-c2b6-4c47-ac86-8f9bd3e903a2-bundle\") pod \"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2\" (UID: \"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2\") " Dec 05 10:38:49 crc kubenswrapper[4796]: I1205 10:38:49.550864 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce917c34-c2b6-4c47-ac86-8f9bd3e903a2-bundle" (OuterVolumeSpecName: "bundle") pod "ce917c34-c2b6-4c47-ac86-8f9bd3e903a2" (UID: "ce917c34-c2b6-4c47-ac86-8f9bd3e903a2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:38:49 crc kubenswrapper[4796]: I1205 10:38:49.556231 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce917c34-c2b6-4c47-ac86-8f9bd3e903a2-kube-api-access-pfxgt" (OuterVolumeSpecName: "kube-api-access-pfxgt") pod "ce917c34-c2b6-4c47-ac86-8f9bd3e903a2" (UID: "ce917c34-c2b6-4c47-ac86-8f9bd3e903a2"). InnerVolumeSpecName "kube-api-access-pfxgt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:38:49 crc kubenswrapper[4796]: I1205 10:38:49.561315 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce917c34-c2b6-4c47-ac86-8f9bd3e903a2-util" (OuterVolumeSpecName: "util") pod "ce917c34-c2b6-4c47-ac86-8f9bd3e903a2" (UID: "ce917c34-c2b6-4c47-ac86-8f9bd3e903a2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:38:49 crc kubenswrapper[4796]: I1205 10:38:49.651604 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfxgt\" (UniqueName: \"kubernetes.io/projected/ce917c34-c2b6-4c47-ac86-8f9bd3e903a2-kube-api-access-pfxgt\") on node \"crc\" DevicePath \"\"" Dec 05 10:38:49 crc kubenswrapper[4796]: I1205 10:38:49.651631 4796 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce917c34-c2b6-4c47-ac86-8f9bd3e903a2-util\") on node \"crc\" DevicePath \"\"" Dec 05 10:38:49 crc kubenswrapper[4796]: I1205 10:38:49.651643 4796 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce917c34-c2b6-4c47-ac86-8f9bd3e903a2-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:38:50 crc kubenswrapper[4796]: I1205 10:38:50.315579 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb" event={"ID":"ce917c34-c2b6-4c47-ac86-8f9bd3e903a2","Type":"ContainerDied","Data":"e642cc1d963c13dd90312ccb2628352d6d0cc70dda30f541008f4f6b49e73591"} Dec 05 10:38:50 crc kubenswrapper[4796]: I1205 10:38:50.315907 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e642cc1d963c13dd90312ccb2628352d6d0cc70dda30f541008f4f6b49e73591" Dec 05 10:38:50 crc kubenswrapper[4796]: I1205 10:38:50.315631 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb" Dec 05 10:38:58 crc kubenswrapper[4796]: I1205 10:38:58.132437 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6b8c9fb9c8-7slmm"] Dec 05 10:38:58 crc kubenswrapper[4796]: E1205 10:38:58.132967 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce917c34-c2b6-4c47-ac86-8f9bd3e903a2" containerName="util" Dec 05 10:38:58 crc kubenswrapper[4796]: I1205 10:38:58.132979 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce917c34-c2b6-4c47-ac86-8f9bd3e903a2" containerName="util" Dec 05 10:38:58 crc kubenswrapper[4796]: E1205 10:38:58.133004 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce917c34-c2b6-4c47-ac86-8f9bd3e903a2" containerName="extract" Dec 05 10:38:58 crc kubenswrapper[4796]: I1205 10:38:58.133010 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce917c34-c2b6-4c47-ac86-8f9bd3e903a2" containerName="extract" Dec 05 10:38:58 crc kubenswrapper[4796]: E1205 10:38:58.133020 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce917c34-c2b6-4c47-ac86-8f9bd3e903a2" containerName="pull" Dec 05 10:38:58 crc kubenswrapper[4796]: I1205 10:38:58.133025 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce917c34-c2b6-4c47-ac86-8f9bd3e903a2" containerName="pull" Dec 05 10:38:58 crc kubenswrapper[4796]: I1205 10:38:58.133106 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce917c34-c2b6-4c47-ac86-8f9bd3e903a2" containerName="extract" Dec 05 10:38:58 crc kubenswrapper[4796]: I1205 10:38:58.133600 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6b8c9fb9c8-7slmm" Dec 05 10:38:58 crc kubenswrapper[4796]: I1205 10:38:58.135029 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-gv2kt" Dec 05 10:38:58 crc kubenswrapper[4796]: I1205 10:38:58.156196 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6b8c9fb9c8-7slmm"] Dec 05 10:38:58 crc kubenswrapper[4796]: I1205 10:38:58.239017 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2p5l\" (UniqueName: \"kubernetes.io/projected/914b4688-6153-4282-8828-65d9500a53bf-kube-api-access-s2p5l\") pod \"openstack-operator-controller-operator-6b8c9fb9c8-7slmm\" (UID: \"914b4688-6153-4282-8828-65d9500a53bf\") " pod="openstack-operators/openstack-operator-controller-operator-6b8c9fb9c8-7slmm" Dec 05 10:38:58 crc kubenswrapper[4796]: I1205 10:38:58.339765 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2p5l\" (UniqueName: \"kubernetes.io/projected/914b4688-6153-4282-8828-65d9500a53bf-kube-api-access-s2p5l\") pod \"openstack-operator-controller-operator-6b8c9fb9c8-7slmm\" (UID: \"914b4688-6153-4282-8828-65d9500a53bf\") " pod="openstack-operators/openstack-operator-controller-operator-6b8c9fb9c8-7slmm" Dec 05 10:38:58 crc kubenswrapper[4796]: I1205 10:38:58.358403 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2p5l\" (UniqueName: \"kubernetes.io/projected/914b4688-6153-4282-8828-65d9500a53bf-kube-api-access-s2p5l\") pod \"openstack-operator-controller-operator-6b8c9fb9c8-7slmm\" (UID: \"914b4688-6153-4282-8828-65d9500a53bf\") " pod="openstack-operators/openstack-operator-controller-operator-6b8c9fb9c8-7slmm" Dec 05 10:38:58 crc kubenswrapper[4796]: I1205 10:38:58.446501 4796 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6b8c9fb9c8-7slmm" Dec 05 10:38:58 crc kubenswrapper[4796]: I1205 10:38:58.787860 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6b8c9fb9c8-7slmm"] Dec 05 10:38:58 crc kubenswrapper[4796]: W1205 10:38:58.790512 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod914b4688_6153_4282_8828_65d9500a53bf.slice/crio-b77133fd76b0f244fe38de4e7478e6a6b8477d65f49ab1239c99e3fa3e04a146 WatchSource:0}: Error finding container b77133fd76b0f244fe38de4e7478e6a6b8477d65f49ab1239c99e3fa3e04a146: Status 404 returned error can't find the container with id b77133fd76b0f244fe38de4e7478e6a6b8477d65f49ab1239c99e3fa3e04a146 Dec 05 10:38:59 crc kubenswrapper[4796]: I1205 10:38:59.352550 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6b8c9fb9c8-7slmm" event={"ID":"914b4688-6153-4282-8828-65d9500a53bf","Type":"ContainerStarted","Data":"b77133fd76b0f244fe38de4e7478e6a6b8477d65f49ab1239c99e3fa3e04a146"} Dec 05 10:39:02 crc kubenswrapper[4796]: I1205 10:39:02.373070 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6b8c9fb9c8-7slmm" event={"ID":"914b4688-6153-4282-8828-65d9500a53bf","Type":"ContainerStarted","Data":"1496a52a199c4fbdf09b607d12d48f5f0df0b9295d21a26f2a981cf54a18ed03"} Dec 05 10:39:04 crc kubenswrapper[4796]: I1205 10:39:04.382753 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6b8c9fb9c8-7slmm" event={"ID":"914b4688-6153-4282-8828-65d9500a53bf","Type":"ContainerStarted","Data":"06130a180f8bd75e5ca9a028c8679f94d6876a3d46272ba2c18f33cb519ecdca"} Dec 05 10:39:04 crc kubenswrapper[4796]: I1205 
10:39:04.383429 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-6b8c9fb9c8-7slmm" Dec 05 10:39:04 crc kubenswrapper[4796]: I1205 10:39:04.406206 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-6b8c9fb9c8-7slmm" podStartSLOduration=1.493068306 podStartE2EDuration="6.406193783s" podCreationTimestamp="2025-12-05 10:38:58 +0000 UTC" firstStartedPulling="2025-12-05 10:38:58.792074156 +0000 UTC m=+685.080179670" lastFinishedPulling="2025-12-05 10:39:03.705199635 +0000 UTC m=+689.993305147" observedRunningTime="2025-12-05 10:39:04.402541582 +0000 UTC m=+690.690647095" watchObservedRunningTime="2025-12-05 10:39:04.406193783 +0000 UTC m=+690.694299297" Dec 05 10:39:05 crc kubenswrapper[4796]: I1205 10:39:05.177637 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:39:05 crc kubenswrapper[4796]: I1205 10:39:05.177737 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:39:06 crc kubenswrapper[4796]: I1205 10:39:06.396725 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-6b8c9fb9c8-7slmm" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.012718 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-748df9766b-sv8rl"] Dec 05 
10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.014468 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-748df9766b-sv8rl" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.015727 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b45f74f94-l9pgt"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.016505 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b45f74f94-l9pgt" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.021652 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-bhbr8" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.025342 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-748df9766b-sv8rl"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.027906 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zgr27" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.031040 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-5644f4c99-b5lst"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.031931 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5644f4c99-b5lst" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.034209 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b45f74f94-l9pgt"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.034217 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-fk6q4" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.044840 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5644f4c99-b5lst"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.055132 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-6f75fb6b58-gz4gq"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.055938 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6f75fb6b58-gz4gq" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.057427 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-xj8qx" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.065313 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6f75fb6b58-gz4gq"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.073665 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-db55fc494-vtkgg"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.074394 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-db55fc494-vtkgg" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.079097 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-ssqx8" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.080554 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-db55fc494-vtkgg"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.095346 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-86b7548d4c-d59d5"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.096224 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-86b7548d4c-d59d5" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.097746 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-fv8w2" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.098065 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.098698 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.099673 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-c2vwn" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.099805 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.127166 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.132156 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-7c55bc5499-tx2js"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.138624 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-7c55bc5499-tx2js" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.144110 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-86b7548d4c-d59d5"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.152029 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-s9qgb" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.169904 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-847b767f55-wqhnd"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.175701 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-847b767f55-wqhnd" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.176116 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-7c55bc5499-tx2js"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.179628 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-6488p" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.199911 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7md92\" (UniqueName: \"kubernetes.io/projected/1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96-kube-api-access-7md92\") pod \"infra-operator-controller-manager-64989647d4-6pkqv\" (UID: \"1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96\") " pod="openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.199956 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhvb7\" (UniqueName: \"kubernetes.io/projected/95efa747-ba05-4c2f-86a8-037452c66764-kube-api-access-fhvb7\") pod \"horizon-operator-controller-manager-86b7548d4c-d59d5\" (UID: \"95efa747-ba05-4c2f-86a8-037452c66764\") " pod="openstack-operators/horizon-operator-controller-manager-86b7548d4c-d59d5" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.199999 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2c76\" (UniqueName: \"kubernetes.io/projected/ecfc5e0d-8538-497e-b578-0ef75e0031db-kube-api-access-d2c76\") pod \"cinder-operator-controller-manager-6b45f74f94-l9pgt\" (UID: \"ecfc5e0d-8538-497e-b578-0ef75e0031db\") " pod="openstack-operators/cinder-operator-controller-manager-6b45f74f94-l9pgt" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 
10:39:23.200029 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp4dz\" (UniqueName: \"kubernetes.io/projected/1fe815d0-1127-44a3-8d89-9964b3b5bbc2-kube-api-access-jp4dz\") pod \"barbican-operator-controller-manager-748df9766b-sv8rl\" (UID: \"1fe815d0-1127-44a3-8d89-9964b3b5bbc2\") " pod="openstack-operators/barbican-operator-controller-manager-748df9766b-sv8rl" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.200046 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngt9t\" (UniqueName: \"kubernetes.io/projected/42939bc6-488c-401e-a313-3b5cc9e75f3b-kube-api-access-ngt9t\") pod \"designate-operator-controller-manager-5644f4c99-b5lst\" (UID: \"42939bc6-488c-401e-a313-3b5cc9e75f3b\") " pod="openstack-operators/designate-operator-controller-manager-5644f4c99-b5lst" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.200062 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96-cert\") pod \"infra-operator-controller-manager-64989647d4-6pkqv\" (UID: \"1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96\") " pod="openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.200083 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkr5v\" (UniqueName: \"kubernetes.io/projected/09474b29-37f5-4e66-9314-6af690b94758-kube-api-access-mkr5v\") pod \"heat-operator-controller-manager-db55fc494-vtkgg\" (UID: \"09474b29-37f5-4e66-9314-6af690b94758\") " pod="openstack-operators/heat-operator-controller-manager-db55fc494-vtkgg" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.200106 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jg74t\" (UniqueName: \"kubernetes.io/projected/3d773ce3-0d67-4965-b84e-86f922daad38-kube-api-access-jg74t\") pod \"glance-operator-controller-manager-6f75fb6b58-gz4gq\" (UID: \"3d773ce3-0d67-4965-b84e-86f922daad38\") " pod="openstack-operators/glance-operator-controller-manager-6f75fb6b58-gz4gq" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.215360 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-847b767f55-wqhnd"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.223130 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-59c7d85948-v5lcv"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.223993 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-59c7d85948-v5lcv" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.226569 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-9s6jv" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.228721 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66b4f6f898-cqrd7"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.229641 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66b4f6f898-cqrd7" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.232177 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-lxxbz" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.232789 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-59c7d85948-v5lcv"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.236621 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66b4f6f898-cqrd7"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.250245 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7f8bc7fb5-pm9cz"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.251096 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7f8bc7fb5-pm9cz" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.253709 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79b74dfcd4-mhcb5"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.254575 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79b74dfcd4-mhcb5" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.255132 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-xml6z" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.258350 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-6w64j" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.261522 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7f8bc7fb5-pm9cz"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.276875 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79b74dfcd4-mhcb5"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.280875 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6869548bb4-wsr9d"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.281782 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6869548bb4-wsr9d" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.284805 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-jbbk8" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.286663 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6869548bb4-wsr9d"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.290280 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8667b5c969stxrl"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.291177 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8667b5c969stxrl" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.293405 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-nv66s" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.293545 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.294274 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-59f9cfd57b-c586s"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.295031 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-59f9cfd57b-c586s" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.296327 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-fsgkh" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.297140 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8667b5c969stxrl"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.300447 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-589d6b8ccb-h27pk"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.301151 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-589d6b8ccb-h27pk" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.301815 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhvb7\" (UniqueName: \"kubernetes.io/projected/95efa747-ba05-4c2f-86a8-037452c66764-kube-api-access-fhvb7\") pod \"horizon-operator-controller-manager-86b7548d4c-d59d5\" (UID: \"95efa747-ba05-4c2f-86a8-037452c66764\") " pod="openstack-operators/horizon-operator-controller-manager-86b7548d4c-d59d5" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.301849 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8j6g\" (UniqueName: \"kubernetes.io/projected/ba227292-a494-43c0-9fbd-addbd8f48b6f-kube-api-access-q8j6g\") pod \"manila-operator-controller-manager-59c7d85948-v5lcv\" (UID: \"ba227292-a494-43c0-9fbd-addbd8f48b6f\") " pod="openstack-operators/manila-operator-controller-manager-59c7d85948-v5lcv" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.301874 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbpf4\" (UniqueName: \"kubernetes.io/projected/bfe3615a-d5c7-4ad2-9e1e-ac4ab279c64f-kube-api-access-dbpf4\") pod \"neutron-operator-controller-manager-7f8bc7fb5-pm9cz\" (UID: \"bfe3615a-d5c7-4ad2-9e1e-ac4ab279c64f\") " pod="openstack-operators/neutron-operator-controller-manager-7f8bc7fb5-pm9cz" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.301889 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25vzt\" (UniqueName: \"kubernetes.io/projected/8e886ef4-4f20-49e6-93d8-d011ac192923-kube-api-access-25vzt\") pod \"keystone-operator-controller-manager-847b767f55-wqhnd\" (UID: \"8e886ef4-4f20-49e6-93d8-d011ac192923\") " pod="openstack-operators/keystone-operator-controller-manager-847b767f55-wqhnd" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.301920 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkf78\" (UniqueName: \"kubernetes.io/projected/f92cf54c-1bcd-4a73-86b2-e4407908953d-kube-api-access-rkf78\") pod \"mariadb-operator-controller-manager-66b4f6f898-cqrd7\" (UID: \"f92cf54c-1bcd-4a73-86b2-e4407908953d\") " pod="openstack-operators/mariadb-operator-controller-manager-66b4f6f898-cqrd7" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.301940 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2c76\" (UniqueName: \"kubernetes.io/projected/ecfc5e0d-8538-497e-b578-0ef75e0031db-kube-api-access-d2c76\") pod \"cinder-operator-controller-manager-6b45f74f94-l9pgt\" (UID: \"ecfc5e0d-8538-497e-b578-0ef75e0031db\") " pod="openstack-operators/cinder-operator-controller-manager-6b45f74f94-l9pgt" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.301959 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vjwf\" 
(UniqueName: \"kubernetes.io/projected/845825d1-623f-4e06-9f2c-d045910eee1a-kube-api-access-8vjwf\") pod \"octavia-operator-controller-manager-6869548bb4-wsr9d\" (UID: \"845825d1-623f-4e06-9f2c-d045910eee1a\") " pod="openstack-operators/octavia-operator-controller-manager-6869548bb4-wsr9d" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.301978 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp4dz\" (UniqueName: \"kubernetes.io/projected/1fe815d0-1127-44a3-8d89-9964b3b5bbc2-kube-api-access-jp4dz\") pod \"barbican-operator-controller-manager-748df9766b-sv8rl\" (UID: \"1fe815d0-1127-44a3-8d89-9964b3b5bbc2\") " pod="openstack-operators/barbican-operator-controller-manager-748df9766b-sv8rl" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.301993 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngt9t\" (UniqueName: \"kubernetes.io/projected/42939bc6-488c-401e-a313-3b5cc9e75f3b-kube-api-access-ngt9t\") pod \"designate-operator-controller-manager-5644f4c99-b5lst\" (UID: \"42939bc6-488c-401e-a313-3b5cc9e75f3b\") " pod="openstack-operators/designate-operator-controller-manager-5644f4c99-b5lst" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.302008 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhnkc\" (UniqueName: \"kubernetes.io/projected/b566972c-0250-4692-8152-31dc732b4147-kube-api-access-dhnkc\") pod \"placement-operator-controller-manager-589d6b8ccb-h27pk\" (UID: \"b566972c-0250-4692-8152-31dc732b4147\") " pod="openstack-operators/placement-operator-controller-manager-589d6b8ccb-h27pk" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.302038 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qp2p\" (UniqueName: \"kubernetes.io/projected/91dc33f2-985f-41d9-8c36-4c37aed1ec16-kube-api-access-7qp2p\") pod 
\"nova-operator-controller-manager-79b74dfcd4-mhcb5\" (UID: \"91dc33f2-985f-41d9-8c36-4c37aed1ec16\") " pod="openstack-operators/nova-operator-controller-manager-79b74dfcd4-mhcb5" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.302053 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj8d6\" (UniqueName: \"kubernetes.io/projected/a1fec903-f9b8-49e1-a4f0-1526dcff64ea-kube-api-access-nj8d6\") pod \"ovn-operator-controller-manager-59f9cfd57b-c586s\" (UID: \"a1fec903-f9b8-49e1-a4f0-1526dcff64ea\") " pod="openstack-operators/ovn-operator-controller-manager-59f9cfd57b-c586s" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.302071 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96-cert\") pod \"infra-operator-controller-manager-64989647d4-6pkqv\" (UID: \"1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96\") " pod="openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.302089 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkr5v\" (UniqueName: \"kubernetes.io/projected/09474b29-37f5-4e66-9314-6af690b94758-kube-api-access-mkr5v\") pod \"heat-operator-controller-manager-db55fc494-vtkgg\" (UID: \"09474b29-37f5-4e66-9314-6af690b94758\") " pod="openstack-operators/heat-operator-controller-manager-db55fc494-vtkgg" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.302108 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhck8\" (UniqueName: \"kubernetes.io/projected/98b514bf-2dd0-4d60-9141-d70dead159cb-kube-api-access-mhck8\") pod \"ironic-operator-controller-manager-7c55bc5499-tx2js\" (UID: \"98b514bf-2dd0-4d60-9141-d70dead159cb\") " 
pod="openstack-operators/ironic-operator-controller-manager-7c55bc5499-tx2js" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.302130 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg74t\" (UniqueName: \"kubernetes.io/projected/3d773ce3-0d67-4965-b84e-86f922daad38-kube-api-access-jg74t\") pod \"glance-operator-controller-manager-6f75fb6b58-gz4gq\" (UID: \"3d773ce3-0d67-4965-b84e-86f922daad38\") " pod="openstack-operators/glance-operator-controller-manager-6f75fb6b58-gz4gq" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.302160 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9314b19e-3947-4091-af58-82275f696602-cert\") pod \"openstack-baremetal-operator-controller-manager-8667b5c969stxrl\" (UID: \"9314b19e-3947-4091-af58-82275f696602\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8667b5c969stxrl" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.302176 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gttrx\" (UniqueName: \"kubernetes.io/projected/9314b19e-3947-4091-af58-82275f696602-kube-api-access-gttrx\") pod \"openstack-baremetal-operator-controller-manager-8667b5c969stxrl\" (UID: \"9314b19e-3947-4091-af58-82275f696602\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8667b5c969stxrl" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.302197 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7md92\" (UniqueName: \"kubernetes.io/projected/1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96-kube-api-access-7md92\") pod \"infra-operator-controller-manager-64989647d4-6pkqv\" (UID: \"1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96\") " pod="openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv" Dec 05 10:39:23 crc 
kubenswrapper[4796]: E1205 10:39:23.302657 4796 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 05 10:39:23 crc kubenswrapper[4796]: E1205 10:39:23.302782 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96-cert podName:1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96 nodeName:}" failed. No retries permitted until 2025-12-05 10:39:23.802767785 +0000 UTC m=+710.090873299 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96-cert") pod "infra-operator-controller-manager-64989647d4-6pkqv" (UID: "1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96") : secret "infra-operator-webhook-server-cert" not found Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.305608 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-7m56g" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.306385 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-59f9cfd57b-c586s"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.310290 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-589d6b8ccb-h27pk"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.323599 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp4dz\" (UniqueName: \"kubernetes.io/projected/1fe815d0-1127-44a3-8d89-9964b3b5bbc2-kube-api-access-jp4dz\") pod \"barbican-operator-controller-manager-748df9766b-sv8rl\" (UID: \"1fe815d0-1127-44a3-8d89-9964b3b5bbc2\") " pod="openstack-operators/barbican-operator-controller-manager-748df9766b-sv8rl" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.324130 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7md92\" (UniqueName: \"kubernetes.io/projected/1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96-kube-api-access-7md92\") pod \"infra-operator-controller-manager-64989647d4-6pkqv\" (UID: \"1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96\") " pod="openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.324363 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhvb7\" (UniqueName: \"kubernetes.io/projected/95efa747-ba05-4c2f-86a8-037452c66764-kube-api-access-fhvb7\") pod \"horizon-operator-controller-manager-86b7548d4c-d59d5\" (UID: \"95efa747-ba05-4c2f-86a8-037452c66764\") " pod="openstack-operators/horizon-operator-controller-manager-86b7548d4c-d59d5" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.327013 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkr5v\" (UniqueName: \"kubernetes.io/projected/09474b29-37f5-4e66-9314-6af690b94758-kube-api-access-mkr5v\") pod \"heat-operator-controller-manager-db55fc494-vtkgg\" (UID: \"09474b29-37f5-4e66-9314-6af690b94758\") " pod="openstack-operators/heat-operator-controller-manager-db55fc494-vtkgg" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.327393 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jg74t\" (UniqueName: \"kubernetes.io/projected/3d773ce3-0d67-4965-b84e-86f922daad38-kube-api-access-jg74t\") pod \"glance-operator-controller-manager-6f75fb6b58-gz4gq\" (UID: \"3d773ce3-0d67-4965-b84e-86f922daad38\") " pod="openstack-operators/glance-operator-controller-manager-6f75fb6b58-gz4gq" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.327601 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-748df9766b-sv8rl" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.330110 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngt9t\" (UniqueName: \"kubernetes.io/projected/42939bc6-488c-401e-a313-3b5cc9e75f3b-kube-api-access-ngt9t\") pod \"designate-operator-controller-manager-5644f4c99-b5lst\" (UID: \"42939bc6-488c-401e-a313-3b5cc9e75f3b\") " pod="openstack-operators/designate-operator-controller-manager-5644f4c99-b5lst" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.330181 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2c76\" (UniqueName: \"kubernetes.io/projected/ecfc5e0d-8538-497e-b578-0ef75e0031db-kube-api-access-d2c76\") pod \"cinder-operator-controller-manager-6b45f74f94-l9pgt\" (UID: \"ecfc5e0d-8538-497e-b578-0ef75e0031db\") " pod="openstack-operators/cinder-operator-controller-manager-6b45f74f94-l9pgt" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.338480 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b45f74f94-l9pgt" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.341879 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5bf496986d-rfkkm"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.342787 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5bf496986d-rfkkm" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.344105 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-bd5t2" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.345549 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5bf496986d-rfkkm"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.345638 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5644f4c99-b5lst" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.369722 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6f75fb6b58-gz4gq" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.390230 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-db55fc494-vtkgg" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.404071 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhck8\" (UniqueName: \"kubernetes.io/projected/98b514bf-2dd0-4d60-9141-d70dead159cb-kube-api-access-mhck8\") pod \"ironic-operator-controller-manager-7c55bc5499-tx2js\" (UID: \"98b514bf-2dd0-4d60-9141-d70dead159cb\") " pod="openstack-operators/ironic-operator-controller-manager-7c55bc5499-tx2js" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.404119 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptpg6\" (UniqueName: \"kubernetes.io/projected/e83951e6-5692-458d-aeba-ae8e6e8cfdd5-kube-api-access-ptpg6\") pod \"swift-operator-controller-manager-5bf496986d-rfkkm\" (UID: \"e83951e6-5692-458d-aeba-ae8e6e8cfdd5\") " pod="openstack-operators/swift-operator-controller-manager-5bf496986d-rfkkm" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.404142 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9314b19e-3947-4091-af58-82275f696602-cert\") pod \"openstack-baremetal-operator-controller-manager-8667b5c969stxrl\" (UID: \"9314b19e-3947-4091-af58-82275f696602\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8667b5c969stxrl" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.404157 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gttrx\" (UniqueName: \"kubernetes.io/projected/9314b19e-3947-4091-af58-82275f696602-kube-api-access-gttrx\") pod \"openstack-baremetal-operator-controller-manager-8667b5c969stxrl\" (UID: \"9314b19e-3947-4091-af58-82275f696602\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8667b5c969stxrl" Dec 05 10:39:23 crc 
kubenswrapper[4796]: I1205 10:39:23.404188 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8j6g\" (UniqueName: \"kubernetes.io/projected/ba227292-a494-43c0-9fbd-addbd8f48b6f-kube-api-access-q8j6g\") pod \"manila-operator-controller-manager-59c7d85948-v5lcv\" (UID: \"ba227292-a494-43c0-9fbd-addbd8f48b6f\") " pod="openstack-operators/manila-operator-controller-manager-59c7d85948-v5lcv" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.404207 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbpf4\" (UniqueName: \"kubernetes.io/projected/bfe3615a-d5c7-4ad2-9e1e-ac4ab279c64f-kube-api-access-dbpf4\") pod \"neutron-operator-controller-manager-7f8bc7fb5-pm9cz\" (UID: \"bfe3615a-d5c7-4ad2-9e1e-ac4ab279c64f\") " pod="openstack-operators/neutron-operator-controller-manager-7f8bc7fb5-pm9cz" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.404223 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25vzt\" (UniqueName: \"kubernetes.io/projected/8e886ef4-4f20-49e6-93d8-d011ac192923-kube-api-access-25vzt\") pod \"keystone-operator-controller-manager-847b767f55-wqhnd\" (UID: \"8e886ef4-4f20-49e6-93d8-d011ac192923\") " pod="openstack-operators/keystone-operator-controller-manager-847b767f55-wqhnd" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.404253 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkf78\" (UniqueName: \"kubernetes.io/projected/f92cf54c-1bcd-4a73-86b2-e4407908953d-kube-api-access-rkf78\") pod \"mariadb-operator-controller-manager-66b4f6f898-cqrd7\" (UID: \"f92cf54c-1bcd-4a73-86b2-e4407908953d\") " pod="openstack-operators/mariadb-operator-controller-manager-66b4f6f898-cqrd7" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.404272 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vjwf\" 
(UniqueName: \"kubernetes.io/projected/845825d1-623f-4e06-9f2c-d045910eee1a-kube-api-access-8vjwf\") pod \"octavia-operator-controller-manager-6869548bb4-wsr9d\" (UID: \"845825d1-623f-4e06-9f2c-d045910eee1a\") " pod="openstack-operators/octavia-operator-controller-manager-6869548bb4-wsr9d" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.404291 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhnkc\" (UniqueName: \"kubernetes.io/projected/b566972c-0250-4692-8152-31dc732b4147-kube-api-access-dhnkc\") pod \"placement-operator-controller-manager-589d6b8ccb-h27pk\" (UID: \"b566972c-0250-4692-8152-31dc732b4147\") " pod="openstack-operators/placement-operator-controller-manager-589d6b8ccb-h27pk" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.404305 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj8d6\" (UniqueName: \"kubernetes.io/projected/a1fec903-f9b8-49e1-a4f0-1526dcff64ea-kube-api-access-nj8d6\") pod \"ovn-operator-controller-manager-59f9cfd57b-c586s\" (UID: \"a1fec903-f9b8-49e1-a4f0-1526dcff64ea\") " pod="openstack-operators/ovn-operator-controller-manager-59f9cfd57b-c586s" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.404319 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qp2p\" (UniqueName: \"kubernetes.io/projected/91dc33f2-985f-41d9-8c36-4c37aed1ec16-kube-api-access-7qp2p\") pod \"nova-operator-controller-manager-79b74dfcd4-mhcb5\" (UID: \"91dc33f2-985f-41d9-8c36-4c37aed1ec16\") " pod="openstack-operators/nova-operator-controller-manager-79b74dfcd4-mhcb5" Dec 05 10:39:23 crc kubenswrapper[4796]: E1205 10:39:23.404871 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 10:39:23 crc kubenswrapper[4796]: E1205 10:39:23.404920 4796 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9314b19e-3947-4091-af58-82275f696602-cert podName:9314b19e-3947-4091-af58-82275f696602 nodeName:}" failed. No retries permitted until 2025-12-05 10:39:23.904907342 +0000 UTC m=+710.193012845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9314b19e-3947-4091-af58-82275f696602-cert") pod "openstack-baremetal-operator-controller-manager-8667b5c969stxrl" (UID: "9314b19e-3947-4091-af58-82275f696602") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.418628 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-75f49469b9-rs7fq"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.421617 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qp2p\" (UniqueName: \"kubernetes.io/projected/91dc33f2-985f-41d9-8c36-4c37aed1ec16-kube-api-access-7qp2p\") pod \"nova-operator-controller-manager-79b74dfcd4-mhcb5\" (UID: \"91dc33f2-985f-41d9-8c36-4c37aed1ec16\") " pod="openstack-operators/nova-operator-controller-manager-79b74dfcd4-mhcb5" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.424929 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8j6g\" (UniqueName: \"kubernetes.io/projected/ba227292-a494-43c0-9fbd-addbd8f48b6f-kube-api-access-q8j6g\") pod \"manila-operator-controller-manager-59c7d85948-v5lcv\" (UID: \"ba227292-a494-43c0-9fbd-addbd8f48b6f\") " pod="openstack-operators/manila-operator-controller-manager-59c7d85948-v5lcv" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.424942 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-75f49469b9-rs7fq" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.426379 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbpf4\" (UniqueName: \"kubernetes.io/projected/bfe3615a-d5c7-4ad2-9e1e-ac4ab279c64f-kube-api-access-dbpf4\") pod \"neutron-operator-controller-manager-7f8bc7fb5-pm9cz\" (UID: \"bfe3615a-d5c7-4ad2-9e1e-ac4ab279c64f\") " pod="openstack-operators/neutron-operator-controller-manager-7f8bc7fb5-pm9cz" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.426772 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-86b7548d4c-d59d5" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.429869 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhck8\" (UniqueName: \"kubernetes.io/projected/98b514bf-2dd0-4d60-9141-d70dead159cb-kube-api-access-mhck8\") pod \"ironic-operator-controller-manager-7c55bc5499-tx2js\" (UID: \"98b514bf-2dd0-4d60-9141-d70dead159cb\") " pod="openstack-operators/ironic-operator-controller-manager-7c55bc5499-tx2js" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.432872 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-njxcj" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.433869 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj8d6\" (UniqueName: \"kubernetes.io/projected/a1fec903-f9b8-49e1-a4f0-1526dcff64ea-kube-api-access-nj8d6\") pod \"ovn-operator-controller-manager-59f9cfd57b-c586s\" (UID: \"a1fec903-f9b8-49e1-a4f0-1526dcff64ea\") " pod="openstack-operators/ovn-operator-controller-manager-59f9cfd57b-c586s" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.436140 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rkf78\" (UniqueName: \"kubernetes.io/projected/f92cf54c-1bcd-4a73-86b2-e4407908953d-kube-api-access-rkf78\") pod \"mariadb-operator-controller-manager-66b4f6f898-cqrd7\" (UID: \"f92cf54c-1bcd-4a73-86b2-e4407908953d\") " pod="openstack-operators/mariadb-operator-controller-manager-66b4f6f898-cqrd7" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.448105 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25vzt\" (UniqueName: \"kubernetes.io/projected/8e886ef4-4f20-49e6-93d8-d011ac192923-kube-api-access-25vzt\") pod \"keystone-operator-controller-manager-847b767f55-wqhnd\" (UID: \"8e886ef4-4f20-49e6-93d8-d011ac192923\") " pod="openstack-operators/keystone-operator-controller-manager-847b767f55-wqhnd" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.449119 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gttrx\" (UniqueName: \"kubernetes.io/projected/9314b19e-3947-4091-af58-82275f696602-kube-api-access-gttrx\") pod \"openstack-baremetal-operator-controller-manager-8667b5c969stxrl\" (UID: \"9314b19e-3947-4091-af58-82275f696602\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8667b5c969stxrl" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.449261 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-75f49469b9-rs7fq"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.453582 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vjwf\" (UniqueName: \"kubernetes.io/projected/845825d1-623f-4e06-9f2c-d045910eee1a-kube-api-access-8vjwf\") pod \"octavia-operator-controller-manager-6869548bb4-wsr9d\" (UID: \"845825d1-623f-4e06-9f2c-d045910eee1a\") " pod="openstack-operators/octavia-operator-controller-manager-6869548bb4-wsr9d" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.455828 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhnkc\" (UniqueName: \"kubernetes.io/projected/b566972c-0250-4692-8152-31dc732b4147-kube-api-access-dhnkc\") pod \"placement-operator-controller-manager-589d6b8ccb-h27pk\" (UID: \"b566972c-0250-4692-8152-31dc732b4147\") " pod="openstack-operators/placement-operator-controller-manager-589d6b8ccb-h27pk" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.477527 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-7c55bc5499-tx2js" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.492266 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-847b767f55-wqhnd" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.510528 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf7ft\" (UniqueName: \"kubernetes.io/projected/622bf26a-5bd4-4936-bd06-ae5ec514f130-kube-api-access-kf7ft\") pod \"telemetry-operator-controller-manager-75f49469b9-rs7fq\" (UID: \"622bf26a-5bd4-4936-bd06-ae5ec514f130\") " pod="openstack-operators/telemetry-operator-controller-manager-75f49469b9-rs7fq" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.510906 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptpg6\" (UniqueName: \"kubernetes.io/projected/e83951e6-5692-458d-aeba-ae8e6e8cfdd5-kube-api-access-ptpg6\") pod \"swift-operator-controller-manager-5bf496986d-rfkkm\" (UID: \"e83951e6-5692-458d-aeba-ae8e6e8cfdd5\") " pod="openstack-operators/swift-operator-controller-manager-5bf496986d-rfkkm" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.517916 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd78796cb-gcttm"] Dec 05 10:39:23 crc kubenswrapper[4796]: 
I1205 10:39:23.518840 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd78796cb-gcttm" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.524573 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-bl5z9" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.530014 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptpg6\" (UniqueName: \"kubernetes.io/projected/e83951e6-5692-458d-aeba-ae8e6e8cfdd5-kube-api-access-ptpg6\") pod \"swift-operator-controller-manager-5bf496986d-rfkkm\" (UID: \"e83951e6-5692-458d-aeba-ae8e6e8cfdd5\") " pod="openstack-operators/swift-operator-controller-manager-5bf496986d-rfkkm" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.530757 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd78796cb-gcttm"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.539507 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-59c7d85948-v5lcv" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.557955 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66b4f6f898-cqrd7" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.567066 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7f8bc7fb5-pm9cz" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.571664 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79b74dfcd4-mhcb5" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.593645 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6869548bb4-wsr9d" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.613511 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kscrf\" (UniqueName: \"kubernetes.io/projected/aa5e433c-e704-4cbf-8db3-6efe20814f65-kube-api-access-kscrf\") pod \"test-operator-controller-manager-7cd78796cb-gcttm\" (UID: \"aa5e433c-e704-4cbf-8db3-6efe20814f65\") " pod="openstack-operators/test-operator-controller-manager-7cd78796cb-gcttm" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.613551 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf7ft\" (UniqueName: \"kubernetes.io/projected/622bf26a-5bd4-4936-bd06-ae5ec514f130-kube-api-access-kf7ft\") pod \"telemetry-operator-controller-manager-75f49469b9-rs7fq\" (UID: \"622bf26a-5bd4-4936-bd06-ae5ec514f130\") " pod="openstack-operators/telemetry-operator-controller-manager-75f49469b9-rs7fq" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.617099 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-784c978c5-v2fgq"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.618461 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-784c978c5-v2fgq" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.619015 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-59f9cfd57b-c586s" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.620619 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-qndbf" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.635099 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-784c978c5-v2fgq"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.646888 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf7ft\" (UniqueName: \"kubernetes.io/projected/622bf26a-5bd4-4936-bd06-ae5ec514f130-kube-api-access-kf7ft\") pod \"telemetry-operator-controller-manager-75f49469b9-rs7fq\" (UID: \"622bf26a-5bd4-4936-bd06-ae5ec514f130\") " pod="openstack-operators/telemetry-operator-controller-manager-75f49469b9-rs7fq" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.706329 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-589d6b8ccb-h27pk" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.714575 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kscrf\" (UniqueName: \"kubernetes.io/projected/aa5e433c-e704-4cbf-8db3-6efe20814f65-kube-api-access-kscrf\") pod \"test-operator-controller-manager-7cd78796cb-gcttm\" (UID: \"aa5e433c-e704-4cbf-8db3-6efe20814f65\") " pod="openstack-operators/test-operator-controller-manager-7cd78796cb-gcttm" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.729975 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5bf496986d-rfkkm" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.733583 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-9989f4965-wmbfg"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.733605 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kscrf\" (UniqueName: \"kubernetes.io/projected/aa5e433c-e704-4cbf-8db3-6efe20814f65-kube-api-access-kscrf\") pod \"test-operator-controller-manager-7cd78796cb-gcttm\" (UID: \"aa5e433c-e704-4cbf-8db3-6efe20814f65\") " pod="openstack-operators/test-operator-controller-manager-7cd78796cb-gcttm" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.734555 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-9989f4965-wmbfg" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.738758 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-2p8rh" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.739009 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.746060 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-75f49469b9-rs7fq" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.755942 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-9989f4965-wmbfg"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.815571 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96-cert\") pod \"infra-operator-controller-manager-64989647d4-6pkqv\" (UID: \"1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96\") " pod="openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.815636 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tff5z\" (UniqueName: \"kubernetes.io/projected/a9f4475f-8ecc-4bc3-a195-e5cf592a1324-kube-api-access-tff5z\") pod \"watcher-operator-controller-manager-784c978c5-v2fgq\" (UID: \"a9f4475f-8ecc-4bc3-a195-e5cf592a1324\") " pod="openstack-operators/watcher-operator-controller-manager-784c978c5-v2fgq" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.826226 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.826977 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.831139 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-cp425" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.836257 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96-cert\") pod \"infra-operator-controller-manager-64989647d4-6pkqv\" (UID: \"1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96\") " pod="openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.842531 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.842881 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd78796cb-gcttm" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.859303 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b45f74f94-l9pgt"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.876995 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-748df9766b-sv8rl"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.916897 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b9a78df2-ecf6-425a-ace1-f005622e0025-cert\") pod \"openstack-operator-controller-manager-9989f4965-wmbfg\" (UID: \"b9a78df2-ecf6-425a-ace1-f005622e0025\") " pod="openstack-operators/openstack-operator-controller-manager-9989f4965-wmbfg" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.916946 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tff5z\" (UniqueName: \"kubernetes.io/projected/a9f4475f-8ecc-4bc3-a195-e5cf592a1324-kube-api-access-tff5z\") pod \"watcher-operator-controller-manager-784c978c5-v2fgq\" (UID: \"a9f4475f-8ecc-4bc3-a195-e5cf592a1324\") " pod="openstack-operators/watcher-operator-controller-manager-784c978c5-v2fgq" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.916987 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9314b19e-3947-4091-af58-82275f696602-cert\") pod \"openstack-baremetal-operator-controller-manager-8667b5c969stxrl\" (UID: \"9314b19e-3947-4091-af58-82275f696602\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8667b5c969stxrl" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.917006 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-r89rc\" (UniqueName: \"kubernetes.io/projected/b9a78df2-ecf6-425a-ace1-f005622e0025-kube-api-access-r89rc\") pod \"openstack-operator-controller-manager-9989f4965-wmbfg\" (UID: \"b9a78df2-ecf6-425a-ace1-f005622e0025\") " pod="openstack-operators/openstack-operator-controller-manager-9989f4965-wmbfg" Dec 05 10:39:23 crc kubenswrapper[4796]: E1205 10:39:23.917849 4796 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 10:39:23 crc kubenswrapper[4796]: E1205 10:39:23.917895 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9314b19e-3947-4091-af58-82275f696602-cert podName:9314b19e-3947-4091-af58-82275f696602 nodeName:}" failed. No retries permitted until 2025-12-05 10:39:24.917880406 +0000 UTC m=+711.205985920 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9314b19e-3947-4091-af58-82275f696602-cert") pod "openstack-baremetal-operator-controller-manager-8667b5c969stxrl" (UID: "9314b19e-3947-4091-af58-82275f696602") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.924943 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5644f4c99-b5lst"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.934911 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tff5z\" (UniqueName: \"kubernetes.io/projected/a9f4475f-8ecc-4bc3-a195-e5cf592a1324-kube-api-access-tff5z\") pod \"watcher-operator-controller-manager-784c978c5-v2fgq\" (UID: \"a9f4475f-8ecc-4bc3-a195-e5cf592a1324\") " pod="openstack-operators/watcher-operator-controller-manager-784c978c5-v2fgq" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.958998 4796 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-784c978c5-v2fgq" Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.959956 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6f75fb6b58-gz4gq"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.968585 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-86b7548d4c-d59d5"] Dec 05 10:39:23 crc kubenswrapper[4796]: I1205 10:39:23.983518 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-db55fc494-vtkgg"] Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.018641 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b9a78df2-ecf6-425a-ace1-f005622e0025-cert\") pod \"openstack-operator-controller-manager-9989f4965-wmbfg\" (UID: \"b9a78df2-ecf6-425a-ace1-f005622e0025\") " pod="openstack-operators/openstack-operator-controller-manager-9989f4965-wmbfg" Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.018739 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r89rc\" (UniqueName: \"kubernetes.io/projected/b9a78df2-ecf6-425a-ace1-f005622e0025-kube-api-access-r89rc\") pod \"openstack-operator-controller-manager-9989f4965-wmbfg\" (UID: \"b9a78df2-ecf6-425a-ace1-f005622e0025\") " pod="openstack-operators/openstack-operator-controller-manager-9989f4965-wmbfg" Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.018782 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbht6\" (UniqueName: \"kubernetes.io/projected/ae34e165-a87d-4395-99c5-1a9f7129e6fe-kube-api-access-hbht6\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn\" (UID: \"ae34e165-a87d-4395-99c5-1a9f7129e6fe\") " 
pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn" Dec 05 10:39:24 crc kubenswrapper[4796]: E1205 10:39:24.018916 4796 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 05 10:39:24 crc kubenswrapper[4796]: E1205 10:39:24.018955 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9a78df2-ecf6-425a-ace1-f005622e0025-cert podName:b9a78df2-ecf6-425a-ace1-f005622e0025 nodeName:}" failed. No retries permitted until 2025-12-05 10:39:24.518939922 +0000 UTC m=+710.807045434 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b9a78df2-ecf6-425a-ace1-f005622e0025-cert") pod "openstack-operator-controller-manager-9989f4965-wmbfg" (UID: "b9a78df2-ecf6-425a-ace1-f005622e0025") : secret "webhook-server-cert" not found Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.032949 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r89rc\" (UniqueName: \"kubernetes.io/projected/b9a78df2-ecf6-425a-ace1-f005622e0025-kube-api-access-r89rc\") pod \"openstack-operator-controller-manager-9989f4965-wmbfg\" (UID: \"b9a78df2-ecf6-425a-ace1-f005622e0025\") " pod="openstack-operators/openstack-operator-controller-manager-9989f4965-wmbfg" Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.032984 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv" Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.075828 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-7c55bc5499-tx2js"] Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.086320 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-847b767f55-wqhnd"] Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.125855 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbht6\" (UniqueName: \"kubernetes.io/projected/ae34e165-a87d-4395-99c5-1a9f7129e6fe-kube-api-access-hbht6\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn\" (UID: \"ae34e165-a87d-4395-99c5-1a9f7129e6fe\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn" Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.143662 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbht6\" (UniqueName: \"kubernetes.io/projected/ae34e165-a87d-4395-99c5-1a9f7129e6fe-kube-api-access-hbht6\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn\" (UID: \"ae34e165-a87d-4395-99c5-1a9f7129e6fe\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn" Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.151463 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn" Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.258547 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7f8bc7fb5-pm9cz"] Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.274975 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-59c7d85948-v5lcv"] Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.290600 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66b4f6f898-cqrd7"] Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.400546 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6869548bb4-wsr9d"] Dec 05 10:39:24 crc kubenswrapper[4796]: W1205 10:39:24.404792 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod845825d1_623f_4e06_9f2c_d045910eee1a.slice/crio-f8f50f6b8145e094714c8c7c146c555635429434585e9a1bd688213b7d973bd2 WatchSource:0}: Error finding container f8f50f6b8145e094714c8c7c146c555635429434585e9a1bd688213b7d973bd2: Status 404 returned error can't find the container with id f8f50f6b8145e094714c8c7c146c555635429434585e9a1bd688213b7d973bd2 Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.407258 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5bf496986d-rfkkm"] Dec 05 10:39:24 crc kubenswrapper[4796]: E1205 10:39:24.422312 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:f076b8d9e85881d9c3cb5272b13db7f5e05d2e9da884c17b677a844112831907,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ptpg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
swift-operator-controller-manager-5bf496986d-rfkkm_openstack-operators(e83951e6-5692-458d-aeba-ae8e6e8cfdd5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.424454 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv"] Dec 05 10:39:24 crc kubenswrapper[4796]: W1205 10:39:24.433397 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b8f7cce_f7de_4fc0_a672_3ce06b1bdf96.slice/crio-8db07cba946c59ab3385c6ef26ff4f94fe8d08edef3684cfd12769aca5d51e08 WatchSource:0}: Error finding container 8db07cba946c59ab3385c6ef26ff4f94fe8d08edef3684cfd12769aca5d51e08: Status 404 returned error can't find the container with id 8db07cba946c59ab3385c6ef26ff4f94fe8d08edef3684cfd12769aca5d51e08 Dec 05 10:39:24 crc kubenswrapper[4796]: E1205 10:39:24.442765 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:e1a731922a2da70b224ce5396602a07cec2b4a79efe7bcdc17c5e4509d16b5e4,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7md92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-64989647d4-6pkqv_openstack-operators(1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.451663 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-75f49469b9-rs7fq"] Dec 05 
10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.467281 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-784c978c5-v2fgq"] Dec 05 10:39:24 crc kubenswrapper[4796]: W1205 10:39:24.473364 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9f4475f_8ecc_4bc3_a195_e5cf592a1324.slice/crio-b6411bcbcd7c0520d655077208b57569d0236f4a97d298fb363fa32bc65212bf WatchSource:0}: Error finding container b6411bcbcd7c0520d655077208b57569d0236f4a97d298fb363fa32bc65212bf: Status 404 returned error can't find the container with id b6411bcbcd7c0520d655077208b57569d0236f4a97d298fb363fa32bc65212bf Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.476091 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn"] Dec 05 10:39:24 crc kubenswrapper[4796]: E1205 10:39:24.482559 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:1988aaf9cd245150cda123aaaa21718ccb552c47f1623b7d68804f13c47f2c6a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tff5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-784c978c5-v2fgq_openstack-operators(a9f4475f-8ecc-4bc3-a195-e5cf592a1324): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 10:39:24 crc kubenswrapper[4796]: E1205 10:39:24.483992 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:78d91c3cdd5eda41c2cd6d4a8491844e161dc33f6221be8cb822b2107d7ff46f,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kf7ft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
telemetry-operator-controller-manager-75f49469b9-rs7fq_openstack-operators(622bf26a-5bd4-4936-bd06-ae5ec514f130): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.484827 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-7c55bc5499-tx2js" event={"ID":"98b514bf-2dd0-4d60-9141-d70dead159cb","Type":"ContainerStarted","Data":"0a038b923d5a3e990e66bf21064bc73c1ef2bdfe73c06c1fa6e43de94afaa1f6"} Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.488015 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-59f9cfd57b-c586s"] Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.488972 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6869548bb4-wsr9d" event={"ID":"845825d1-623f-4e06-9f2c-d045910eee1a","Type":"ContainerStarted","Data":"f8f50f6b8145e094714c8c7c146c555635429434585e9a1bd688213b7d973bd2"} Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.494590 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79b74dfcd4-mhcb5"] Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.500197 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-589d6b8ccb-h27pk"] Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.500556 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-db55fc494-vtkgg" event={"ID":"09474b29-37f5-4e66-9314-6af690b94758","Type":"ContainerStarted","Data":"5ad648e56b5b5ad93e63884d8b22ec6e5222ab497a63555ec487e6ebeb87a9e4"} Dec 05 10:39:24 crc kubenswrapper[4796]: E1205 10:39:24.501295 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hbht6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn_openstack-operators(ae34e165-a87d-4395-99c5-1a9f7129e6fe): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 
10:39:24 crc kubenswrapper[4796]: E1205 10:39:24.501412 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:debe5d6d29a007374b270b0e114e69b2136eee61dabab8576baf4010c951edb9,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7qp2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79b74dfcd4-mhcb5_openstack-operators(91dc33f2-985f-41d9-8c36-4c37aed1ec16): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 10:39:24 crc kubenswrapper[4796]: E1205 10:39:24.502396 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn" podUID="ae34e165-a87d-4395-99c5-1a9f7129e6fe" Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.504330 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd78796cb-gcttm"] Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.505422 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5644f4c99-b5lst" event={"ID":"42939bc6-488c-401e-a313-3b5cc9e75f3b","Type":"ContainerStarted","Data":"efa719c977b22bfe68b4896f442947faaa468012ec094d1290e5b3459c56c871"} Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.510182 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/manila-operator-controller-manager-59c7d85948-v5lcv" event={"ID":"ba227292-a494-43c0-9fbd-addbd8f48b6f","Type":"ContainerStarted","Data":"7aafaf3e8683dea58bf52ddc64f77cf0ebee4777cc333275149e2c35842a3c3b"} Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.512964 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-847b767f55-wqhnd" event={"ID":"8e886ef4-4f20-49e6-93d8-d011ac192923","Type":"ContainerStarted","Data":"27b6a0bcf1153e4b324ef30147b4c0798cc7292cf1aa5717f0c4687a81ed720c"} Dec 05 10:39:24 crc kubenswrapper[4796]: E1205 10:39:24.514676 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:49180c7bd4f0071e43ae7044260a3a97c4aa34fcbcb2d0d4573df449765ed391,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kscrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd78796cb-gcttm_openstack-operators(aa5e433c-e704-4cbf-8db3-6efe20814f65): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 10:39:24 crc kubenswrapper[4796]: E1205 10:39:24.514715 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:fd917de0cf800ec284ee0c3f2906a06d85ea18cb75a5b06c8eb305750467986d,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: 
{{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dhnkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-589d6b8ccb-h27pk_openstack-operators(b566972c-0250-4692-8152-31dc732b4147): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.514864 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5bf496986d-rfkkm" 
event={"ID":"e83951e6-5692-458d-aeba-ae8e6e8cfdd5","Type":"ContainerStarted","Data":"cfc351cba18ba4bc44b9720c27245f2f93654a81e4ec23c08a623d54259a0afd"} Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.515294 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7f8bc7fb5-pm9cz" event={"ID":"bfe3615a-d5c7-4ad2-9e1e-ac4ab279c64f","Type":"ContainerStarted","Data":"a319bfe3fb0612f9d99a8e66d5133b92ced419f7482fd060b0c8ce82e781494d"} Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.516522 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66b4f6f898-cqrd7" event={"ID":"f92cf54c-1bcd-4a73-86b2-e4407908953d","Type":"ContainerStarted","Data":"7869cbb02eb9fe3d994aadd10fb4a54e775783c874771cb2798e5b319cbebb56"} Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.517267 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv" event={"ID":"1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96","Type":"ContainerStarted","Data":"8db07cba946c59ab3385c6ef26ff4f94fe8d08edef3684cfd12769aca5d51e08"} Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.519079 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-86b7548d4c-d59d5" event={"ID":"95efa747-ba05-4c2f-86a8-037452c66764","Type":"ContainerStarted","Data":"4c622cbd06f6974121ddb0f54853a5de388507a48591486f077ee5962a178e7e"} Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.519820 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-784c978c5-v2fgq" event={"ID":"a9f4475f-8ecc-4bc3-a195-e5cf592a1324","Type":"ContainerStarted","Data":"b6411bcbcd7c0520d655077208b57569d0236f4a97d298fb363fa32bc65212bf"} Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.521099 4796 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/barbican-operator-controller-manager-748df9766b-sv8rl" event={"ID":"1fe815d0-1127-44a3-8d89-9964b3b5bbc2","Type":"ContainerStarted","Data":"96807f25bb2c8671417e04b529f94e7e3781d0ba004d828affda9bddccc9f697"} Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.521923 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6f75fb6b58-gz4gq" event={"ID":"3d773ce3-0d67-4965-b84e-86f922daad38","Type":"ContainerStarted","Data":"b79b3373ac6f5261c09b0b50194f2d54fa19c792555a5e133e1625849e4d4d78"} Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.522741 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b45f74f94-l9pgt" event={"ID":"ecfc5e0d-8538-497e-b578-0ef75e0031db","Type":"ContainerStarted","Data":"bcc7a4dd3a1b51163dc94e72a7643d388938e0c9d4f8d89e0c095d25df3a53c9"} Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.531111 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b9a78df2-ecf6-425a-ace1-f005622e0025-cert\") pod \"openstack-operator-controller-manager-9989f4965-wmbfg\" (UID: \"b9a78df2-ecf6-425a-ace1-f005622e0025\") " pod="openstack-operators/openstack-operator-controller-manager-9989f4965-wmbfg" Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.538560 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b9a78df2-ecf6-425a-ace1-f005622e0025-cert\") pod \"openstack-operator-controller-manager-9989f4965-wmbfg\" (UID: \"b9a78df2-ecf6-425a-ace1-f005622e0025\") " pod="openstack-operators/openstack-operator-controller-manager-9989f4965-wmbfg" Dec 05 10:39:24 crc kubenswrapper[4796]: E1205 10:39:24.571564 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/swift-operator-controller-manager-5bf496986d-rfkkm" podUID="e83951e6-5692-458d-aeba-ae8e6e8cfdd5" Dec 05 10:39:24 crc kubenswrapper[4796]: E1205 10:39:24.578757 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv" podUID="1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96" Dec 05 10:39:24 crc kubenswrapper[4796]: E1205 10:39:24.608465 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-75f49469b9-rs7fq" podUID="622bf26a-5bd4-4936-bd06-ae5ec514f130" Dec 05 10:39:24 crc kubenswrapper[4796]: E1205 10:39:24.612366 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-784c978c5-v2fgq" podUID="a9f4475f-8ecc-4bc3-a195-e5cf592a1324" Dec 05 10:39:24 crc kubenswrapper[4796]: E1205 10:39:24.649328 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-79b74dfcd4-mhcb5" podUID="91dc33f2-985f-41d9-8c36-4c37aed1ec16" Dec 05 10:39:24 crc kubenswrapper[4796]: E1205 10:39:24.659451 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-589d6b8ccb-h27pk" podUID="b566972c-0250-4692-8152-31dc732b4147" Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.664120 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-9989f4965-wmbfg" Dec 05 10:39:24 crc kubenswrapper[4796]: E1205 10:39:24.669782 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7cd78796cb-gcttm" podUID="aa5e433c-e704-4cbf-8db3-6efe20814f65" Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.936603 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9314b19e-3947-4091-af58-82275f696602-cert\") pod \"openstack-baremetal-operator-controller-manager-8667b5c969stxrl\" (UID: \"9314b19e-3947-4091-af58-82275f696602\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8667b5c969stxrl" Dec 05 10:39:24 crc kubenswrapper[4796]: I1205 10:39:24.946657 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9314b19e-3947-4091-af58-82275f696602-cert\") pod \"openstack-baremetal-operator-controller-manager-8667b5c969stxrl\" (UID: \"9314b19e-3947-4091-af58-82275f696602\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-8667b5c969stxrl" Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.106585 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8667b5c969stxrl" Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.128803 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-9989f4965-wmbfg"] Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.583823 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv" event={"ID":"1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96","Type":"ContainerStarted","Data":"ac8400a38855a7b6e5155242370090032d8300bb7c0c6fcd0d87d35fddef9184"} Dec 05 10:39:25 crc kubenswrapper[4796]: E1205 10:39:25.588909 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:e1a731922a2da70b224ce5396602a07cec2b4a79efe7bcdc17c5e4509d16b5e4\\\"\"" pod="openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv" podUID="1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96" Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.595084 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd78796cb-gcttm" event={"ID":"aa5e433c-e704-4cbf-8db3-6efe20814f65","Type":"ContainerStarted","Data":"a92c8ab06c6b56077c221429232f4533bd1289ef16a1e720d918165d1a048b79"} Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.595115 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd78796cb-gcttm" event={"ID":"aa5e433c-e704-4cbf-8db3-6efe20814f65","Type":"ContainerStarted","Data":"5cf93ffb437e1bd939e2ad57f83073ff45d6b0511bbcf9db42da7e23f858a2a2"} Dec 05 10:39:25 crc kubenswrapper[4796]: E1205 10:39:25.596529 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:49180c7bd4f0071e43ae7044260a3a97c4aa34fcbcb2d0d4573df449765ed391\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd78796cb-gcttm" podUID="aa5e433c-e704-4cbf-8db3-6efe20814f65" Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.612545 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-8667b5c969stxrl"] Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.613449 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-75f49469b9-rs7fq" event={"ID":"622bf26a-5bd4-4936-bd06-ae5ec514f130","Type":"ContainerStarted","Data":"fdc23d4633aa1df1cdc6716654060f5790f240eb27b0a81dec5cc852d86b5fc8"} Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.613471 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-75f49469b9-rs7fq" event={"ID":"622bf26a-5bd4-4936-bd06-ae5ec514f130","Type":"ContainerStarted","Data":"6dc7228c5972367060bcff488f58486a2552f27777cb5b741e05eae708db2f4a"} Dec 05 10:39:25 crc kubenswrapper[4796]: E1205 10:39:25.614476 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:78d91c3cdd5eda41c2cd6d4a8491844e161dc33f6221be8cb822b2107d7ff46f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-75f49469b9-rs7fq" podUID="622bf26a-5bd4-4936-bd06-ae5ec514f130" Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.617411 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-59f9cfd57b-c586s" 
event={"ID":"a1fec903-f9b8-49e1-a4f0-1526dcff64ea","Type":"ContainerStarted","Data":"59215537e46ee22effb5365c363ecccac3f2f127560ff68c04e45754c2557c9f"} Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.633171 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-589d6b8ccb-h27pk" event={"ID":"b566972c-0250-4692-8152-31dc732b4147","Type":"ContainerStarted","Data":"3dddf31eeec89a15b265f8e9e9b791c3220d4aef96871c76ae9edd45c24d7767"} Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.633198 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-589d6b8ccb-h27pk" event={"ID":"b566972c-0250-4692-8152-31dc732b4147","Type":"ContainerStarted","Data":"e34cedeecac9af54590e1714126a36ec0c97671f3f58dd0f8b92bab1f0121dbc"} Dec 05 10:39:25 crc kubenswrapper[4796]: E1205 10:39:25.634646 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:fd917de0cf800ec284ee0c3f2906a06d85ea18cb75a5b06c8eb305750467986d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-589d6b8ccb-h27pk" podUID="b566972c-0250-4692-8152-31dc732b4147" Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.639139 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-784c978c5-v2fgq" event={"ID":"a9f4475f-8ecc-4bc3-a195-e5cf592a1324","Type":"ContainerStarted","Data":"5f589b20c537d6bf8c61f2ad68905e46da0c242fe54e256de881b3ecde0e90f3"} Dec 05 10:39:25 crc kubenswrapper[4796]: E1205 10:39:25.640796 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:1988aaf9cd245150cda123aaaa21718ccb552c47f1623b7d68804f13c47f2c6a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-784c978c5-v2fgq" podUID="a9f4475f-8ecc-4bc3-a195-e5cf592a1324" Dec 05 10:39:25 crc kubenswrapper[4796]: W1205 10:39:25.641232 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9314b19e_3947_4091_af58_82275f696602.slice/crio-87d97ab674807102496d2e050530237dff0c83f4ed6d90fdfa240565ef11149b WatchSource:0}: Error finding container 87d97ab674807102496d2e050530237dff0c83f4ed6d90fdfa240565ef11149b: Status 404 returned error can't find the container with id 87d97ab674807102496d2e050530237dff0c83f4ed6d90fdfa240565ef11149b Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.651993 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-9989f4965-wmbfg" event={"ID":"b9a78df2-ecf6-425a-ace1-f005622e0025","Type":"ContainerStarted","Data":"7bd9567418df8e734bb8049fb8c5e673939dfafda1e0bba7f3756fbb4818e209"} Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.652047 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-9989f4965-wmbfg" event={"ID":"b9a78df2-ecf6-425a-ace1-f005622e0025","Type":"ContainerStarted","Data":"aa8e7e136fd64ceaf9512b8e1e89b5e631547f38ac543e2fac6978ad10eebb11"} Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.652060 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-9989f4965-wmbfg" event={"ID":"b9a78df2-ecf6-425a-ace1-f005622e0025","Type":"ContainerStarted","Data":"2a842398b68fa0a1a48e72166a3d42ee2b853f08207286cebc53856c44b36f7f"} Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.652439 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-controller-manager-9989f4965-wmbfg" Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.665300 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5bf496986d-rfkkm" event={"ID":"e83951e6-5692-458d-aeba-ae8e6e8cfdd5","Type":"ContainerStarted","Data":"802d174ea7e0d3bd9340372146ca3dc5dd25dc290c5554ac75155e896ea809b0"} Dec 05 10:39:25 crc kubenswrapper[4796]: E1205 10:39:25.670317 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:f076b8d9e85881d9c3cb5272b13db7f5e05d2e9da884c17b677a844112831907\\\"\"" pod="openstack-operators/swift-operator-controller-manager-5bf496986d-rfkkm" podUID="e83951e6-5692-458d-aeba-ae8e6e8cfdd5" Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.685825 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79b74dfcd4-mhcb5" event={"ID":"91dc33f2-985f-41d9-8c36-4c37aed1ec16","Type":"ContainerStarted","Data":"bd7ad32e8e052171b598a9568de9c7a1040d634608ac475a78b6cae105cc2422"} Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.685859 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79b74dfcd4-mhcb5" event={"ID":"91dc33f2-985f-41d9-8c36-4c37aed1ec16","Type":"ContainerStarted","Data":"86b8d3550e90da0908595e20726894222b6d544b9685fc086d51cea91c157eae"} Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.700799 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn" event={"ID":"ae34e165-a87d-4395-99c5-1a9f7129e6fe","Type":"ContainerStarted","Data":"82af6a4ae639e0df4548c54c16a4083944d193fd45fe6bc3d67081a8de083227"} Dec 05 10:39:25 crc kubenswrapper[4796]: E1205 10:39:25.705777 
4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn" podUID="ae34e165-a87d-4395-99c5-1a9f7129e6fe" Dec 05 10:39:25 crc kubenswrapper[4796]: E1205 10:39:25.710040 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:debe5d6d29a007374b270b0e114e69b2136eee61dabab8576baf4010c951edb9\\\"\"" pod="openstack-operators/nova-operator-controller-manager-79b74dfcd4-mhcb5" podUID="91dc33f2-985f-41d9-8c36-4c37aed1ec16" Dec 05 10:39:25 crc kubenswrapper[4796]: I1205 10:39:25.836414 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-9989f4965-wmbfg" podStartSLOduration=2.8363997420000002 podStartE2EDuration="2.836399742s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:39:25.828496241 +0000 UTC m=+712.116601755" watchObservedRunningTime="2025-12-05 10:39:25.836399742 +0000 UTC m=+712.124505255" Dec 05 10:39:26 crc kubenswrapper[4796]: I1205 10:39:26.709382 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8667b5c969stxrl" event={"ID":"9314b19e-3947-4091-af58-82275f696602","Type":"ContainerStarted","Data":"87d97ab674807102496d2e050530237dff0c83f4ed6d90fdfa240565ef11149b"} Dec 05 10:39:26 crc kubenswrapper[4796]: E1205 10:39:26.710528 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:49180c7bd4f0071e43ae7044260a3a97c4aa34fcbcb2d0d4573df449765ed391\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd78796cb-gcttm" podUID="aa5e433c-e704-4cbf-8db3-6efe20814f65" Dec 05 10:39:26 crc kubenswrapper[4796]: E1205 10:39:26.711199 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:debe5d6d29a007374b270b0e114e69b2136eee61dabab8576baf4010c951edb9\\\"\"" pod="openstack-operators/nova-operator-controller-manager-79b74dfcd4-mhcb5" podUID="91dc33f2-985f-41d9-8c36-4c37aed1ec16" Dec 05 10:39:26 crc kubenswrapper[4796]: E1205 10:39:26.711547 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn" podUID="ae34e165-a87d-4395-99c5-1a9f7129e6fe" Dec 05 10:39:26 crc kubenswrapper[4796]: E1205 10:39:26.711877 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:e1a731922a2da70b224ce5396602a07cec2b4a79efe7bcdc17c5e4509d16b5e4\\\"\"" pod="openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv" podUID="1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96" Dec 05 10:39:26 crc kubenswrapper[4796]: E1205 10:39:26.712247 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:78d91c3cdd5eda41c2cd6d4a8491844e161dc33f6221be8cb822b2107d7ff46f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-75f49469b9-rs7fq" podUID="622bf26a-5bd4-4936-bd06-ae5ec514f130" Dec 05 10:39:26 crc kubenswrapper[4796]: E1205 10:39:26.712273 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:fd917de0cf800ec284ee0c3f2906a06d85ea18cb75a5b06c8eb305750467986d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-589d6b8ccb-h27pk" podUID="b566972c-0250-4692-8152-31dc732b4147" Dec 05 10:39:26 crc kubenswrapper[4796]: E1205 10:39:26.712327 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:1988aaf9cd245150cda123aaaa21718ccb552c47f1623b7d68804f13c47f2c6a\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-784c978c5-v2fgq" podUID="a9f4475f-8ecc-4bc3-a195-e5cf592a1324" Dec 05 10:39:26 crc kubenswrapper[4796]: E1205 10:39:26.712783 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:f076b8d9e85881d9c3cb5272b13db7f5e05d2e9da884c17b677a844112831907\\\"\"" pod="openstack-operators/swift-operator-controller-manager-5bf496986d-rfkkm" podUID="e83951e6-5692-458d-aeba-ae8e6e8cfdd5" Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.776388 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b45f74f94-l9pgt" 
event={"ID":"ecfc5e0d-8538-497e-b578-0ef75e0031db","Type":"ContainerStarted","Data":"99039b0472219711b802517c2a1e35e4f5eed97dbfe3256987ed7f72ad798c6f"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.778402 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-59f9cfd57b-c586s" event={"ID":"a1fec903-f9b8-49e1-a4f0-1526dcff64ea","Type":"ContainerStarted","Data":"a3d85220e1bc6ebc88a1b4f1ecd82f5d298e806a63cb54de5da0386ed8b550fd"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.778442 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-59f9cfd57b-c586s" event={"ID":"a1fec903-f9b8-49e1-a4f0-1526dcff64ea","Type":"ContainerStarted","Data":"f00159fb96d73f3cc37b7e960b27eda5df6526e2ef37ebf5e02972d88c50cf0e"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.778582 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-59f9cfd57b-c586s" Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.788640 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6869548bb4-wsr9d" event={"ID":"845825d1-623f-4e06-9f2c-d045910eee1a","Type":"ContainerStarted","Data":"414b9bf205b64273fc0280b73ce0b360390fed7fe58c0fce23b927d54becad25"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.788673 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6869548bb4-wsr9d" event={"ID":"845825d1-623f-4e06-9f2c-d045910eee1a","Type":"ContainerStarted","Data":"a0dd33bf8cffa12befe641ce0b05cb0e0b87c35cace7da26199ffae59bdb2ca5"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.788799 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6869548bb4-wsr9d" Dec 05 10:39:33 crc kubenswrapper[4796]: 
I1205 10:39:33.790119 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7f8bc7fb5-pm9cz" event={"ID":"bfe3615a-d5c7-4ad2-9e1e-ac4ab279c64f","Type":"ContainerStarted","Data":"7756d86bdb812eba92a87efaead84403e05de0552f8c20426c557731a07ae1dd"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.790148 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7f8bc7fb5-pm9cz" event={"ID":"bfe3615a-d5c7-4ad2-9e1e-ac4ab279c64f","Type":"ContainerStarted","Data":"8962676377ddff48ebbe3b68fb4d9ccec2e053ca090d23072ac5f92dc426aec0"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.790555 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7f8bc7fb5-pm9cz" Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.799599 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66b4f6f898-cqrd7" event={"ID":"f92cf54c-1bcd-4a73-86b2-e4407908953d","Type":"ContainerStarted","Data":"a3416c2cb614a00558a547b44f23cf16bfccc471a3fd7c7f6ff3c03c7c0eb7e1"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.799631 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66b4f6f898-cqrd7" event={"ID":"f92cf54c-1bcd-4a73-86b2-e4407908953d","Type":"ContainerStarted","Data":"10ae9ad9c36f45469341ac5f8b48682d296ff98d4c2ef63416da654b26a29b8a"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.800000 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-66b4f6f898-cqrd7" Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.801644 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-59f9cfd57b-c586s" 
podStartSLOduration=2.489987477 podStartE2EDuration="10.801635531s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:24.524468659 +0000 UTC m=+710.812574172" lastFinishedPulling="2025-12-05 10:39:32.836116712 +0000 UTC m=+719.124222226" observedRunningTime="2025-12-05 10:39:33.796326342 +0000 UTC m=+720.084431855" watchObservedRunningTime="2025-12-05 10:39:33.801635531 +0000 UTC m=+720.089741043" Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.818840 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-86b7548d4c-d59d5" event={"ID":"95efa747-ba05-4c2f-86a8-037452c66764","Type":"ContainerStarted","Data":"4a6e438ef7a177f8dac1707d97725d7c3aa84f58edd8259d300d17a6172897fd"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.818883 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-86b7548d4c-d59d5" event={"ID":"95efa747-ba05-4c2f-86a8-037452c66764","Type":"ContainerStarted","Data":"b6e7efa3d77d89522a05bd983194aabe59a91f65a34d075550d648c81a29c156"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.818899 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-86b7548d4c-d59d5" Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.821389 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7f8bc7fb5-pm9cz" podStartSLOduration=2.237161487 podStartE2EDuration="10.821375588s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:24.242701902 +0000 UTC m=+710.530807415" lastFinishedPulling="2025-12-05 10:39:32.826916002 +0000 UTC m=+719.115021516" observedRunningTime="2025-12-05 10:39:33.819846792 +0000 UTC m=+720.107952304" watchObservedRunningTime="2025-12-05 10:39:33.821375588 +0000 UTC 
m=+720.109481100" Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.836110 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-59c7d85948-v5lcv" event={"ID":"ba227292-a494-43c0-9fbd-addbd8f48b6f","Type":"ContainerStarted","Data":"4330dbd856c589942c8f945123cd8e3b1584e62104058d41bf326c2aac98a373"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.836156 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-59c7d85948-v5lcv" event={"ID":"ba227292-a494-43c0-9fbd-addbd8f48b6f","Type":"ContainerStarted","Data":"417a8441c6d2e6a1c0a060c9b18126bd9870c285d331acd82340b2d013013655"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.836220 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-59c7d85948-v5lcv" Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.847475 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6869548bb4-wsr9d" podStartSLOduration=2.431497578 podStartE2EDuration="10.847463268s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:24.409296542 +0000 UTC m=+710.697402055" lastFinishedPulling="2025-12-05 10:39:32.825262232 +0000 UTC m=+719.113367745" observedRunningTime="2025-12-05 10:39:33.84562527 +0000 UTC m=+720.133730773" watchObservedRunningTime="2025-12-05 10:39:33.847463268 +0000 UTC m=+720.135568781" Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.848941 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8667b5c969stxrl" event={"ID":"9314b19e-3947-4091-af58-82275f696602","Type":"ContainerStarted","Data":"94381c3e3e9625328691842bd55ffdd1161912fb28fcc799f6a7d53380c5f40b"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 
10:39:33.856752 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-847b767f55-wqhnd" event={"ID":"8e886ef4-4f20-49e6-93d8-d011ac192923","Type":"ContainerStarted","Data":"91983e40725971ad2aef70ea7f98ddef1134a78fa7b9b550363dbf7f0430a082"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.867315 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-66b4f6f898-cqrd7" podStartSLOduration=2.324276739 podStartE2EDuration="10.867298866s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:24.283860231 +0000 UTC m=+710.571965745" lastFinishedPulling="2025-12-05 10:39:32.82688237 +0000 UTC m=+719.114987872" observedRunningTime="2025-12-05 10:39:33.864593756 +0000 UTC m=+720.152699270" watchObservedRunningTime="2025-12-05 10:39:33.867298866 +0000 UTC m=+720.155404379" Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.880970 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-748df9766b-sv8rl" event={"ID":"1fe815d0-1127-44a3-8d89-9964b3b5bbc2","Type":"ContainerStarted","Data":"c357964bccfc5d7d21c4ea01cad4a7d6dd26acaf6be07c1199b20ed94617aca3"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.881507 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-748df9766b-sv8rl" Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.902169 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6f75fb6b58-gz4gq" event={"ID":"3d773ce3-0d67-4965-b84e-86f922daad38","Type":"ContainerStarted","Data":"53d567c488fe3b7ab24566594f44ab054e1d1e0c0551575919fb8fe3fef9440f"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.907577 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-5644f4c99-b5lst" event={"ID":"42939bc6-488c-401e-a313-3b5cc9e75f3b","Type":"ContainerStarted","Data":"563a630537cce562b70600a06c405c9b28faf53bc40c9717eaf3b26b9bf9505f"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.911146 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-7c55bc5499-tx2js" event={"ID":"98b514bf-2dd0-4d60-9141-d70dead159cb","Type":"ContainerStarted","Data":"858c035f0c51ef445952042d08aa5046c43c01a010d003cf116530c10f9658fd"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.911172 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-7c55bc5499-tx2js" event={"ID":"98b514bf-2dd0-4d60-9141-d70dead159cb","Type":"ContainerStarted","Data":"79dc1f126606ba99e8fa994862081464d33d55ac803ea1cd6d48cc4fd849decd"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.911262 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-7c55bc5499-tx2js" Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.914991 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-86b7548d4c-d59d5" podStartSLOduration=2.082330484 podStartE2EDuration="10.914981643s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:23.988591901 +0000 UTC m=+710.276697414" lastFinishedPulling="2025-12-05 10:39:32.82124306 +0000 UTC m=+719.109348573" observedRunningTime="2025-12-05 10:39:33.889644226 +0000 UTC m=+720.177749738" watchObservedRunningTime="2025-12-05 10:39:33.914981643 +0000 UTC m=+720.203087157" Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.916776 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/manila-operator-controller-manager-59c7d85948-v5lcv" podStartSLOduration=2.344127235 podStartE2EDuration="10.916771822s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:24.283568714 +0000 UTC m=+710.571674227" lastFinishedPulling="2025-12-05 10:39:32.856213302 +0000 UTC m=+719.144318814" observedRunningTime="2025-12-05 10:39:33.913174093 +0000 UTC m=+720.201279605" watchObservedRunningTime="2025-12-05 10:39:33.916771822 +0000 UTC m=+720.204877334" Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.925924 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-db55fc494-vtkgg" event={"ID":"09474b29-37f5-4e66-9314-6af690b94758","Type":"ContainerStarted","Data":"ca66a21425d33ce7d4db1a44dbb44773a0e2d931181b9685f91127120761d50f"} Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.926452 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-db55fc494-vtkgg" Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.930926 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-7c55bc5499-tx2js" podStartSLOduration=2.188169538 podStartE2EDuration="10.930920069s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:24.078076447 +0000 UTC m=+710.366181960" lastFinishedPulling="2025-12-05 10:39:32.820826978 +0000 UTC m=+719.108932491" observedRunningTime="2025-12-05 10:39:33.929839247 +0000 UTC m=+720.217944759" watchObservedRunningTime="2025-12-05 10:39:33.930920069 +0000 UTC m=+720.219025582" Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.956825 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-748df9766b-sv8rl" podStartSLOduration=2.049402746 
podStartE2EDuration="10.956813515s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:23.917400043 +0000 UTC m=+710.205505556" lastFinishedPulling="2025-12-05 10:39:32.824810812 +0000 UTC m=+719.112916325" observedRunningTime="2025-12-05 10:39:33.953750291 +0000 UTC m=+720.241855805" watchObservedRunningTime="2025-12-05 10:39:33.956813515 +0000 UTC m=+720.244919027" Dec 05 10:39:33 crc kubenswrapper[4796]: I1205 10:39:33.983306 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-db55fc494-vtkgg" podStartSLOduration=2.184483747 podStartE2EDuration="10.983291368s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:24.02611403 +0000 UTC m=+710.314219543" lastFinishedPulling="2025-12-05 10:39:32.824921661 +0000 UTC m=+719.113027164" observedRunningTime="2025-12-05 10:39:33.979613879 +0000 UTC m=+720.267719392" watchObservedRunningTime="2025-12-05 10:39:33.983291368 +0000 UTC m=+720.271396880" Dec 05 10:39:34 crc kubenswrapper[4796]: I1205 10:39:34.670270 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-9989f4965-wmbfg" Dec 05 10:39:34 crc kubenswrapper[4796]: I1205 10:39:34.932418 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6f75fb6b58-gz4gq" event={"ID":"3d773ce3-0d67-4965-b84e-86f922daad38","Type":"ContainerStarted","Data":"8e243df8991b00b0d64cf63abb80fa682eef550124bb47d99a5b0264df321964"} Dec 05 10:39:34 crc kubenswrapper[4796]: I1205 10:39:34.932564 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-6f75fb6b58-gz4gq" Dec 05 10:39:34 crc kubenswrapper[4796]: I1205 10:39:34.934027 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-6b45f74f94-l9pgt" event={"ID":"ecfc5e0d-8538-497e-b578-0ef75e0031db","Type":"ContainerStarted","Data":"fde2aef12ec839490aff57b455f2515fcec3538134c10b63eefbe43d5c1cfbf4"} Dec 05 10:39:34 crc kubenswrapper[4796]: I1205 10:39:34.934137 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6b45f74f94-l9pgt" Dec 05 10:39:34 crc kubenswrapper[4796]: I1205 10:39:34.936693 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5644f4c99-b5lst" event={"ID":"42939bc6-488c-401e-a313-3b5cc9e75f3b","Type":"ContainerStarted","Data":"5259f4c2dece85c27cb1506021fcd1d1d48c81d1aad369d1d60fe079646bae2c"} Dec 05 10:39:34 crc kubenswrapper[4796]: I1205 10:39:34.936801 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-5644f4c99-b5lst" Dec 05 10:39:34 crc kubenswrapper[4796]: I1205 10:39:34.937902 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8667b5c969stxrl" event={"ID":"9314b19e-3947-4091-af58-82275f696602","Type":"ContainerStarted","Data":"6dd74c081f7f05833aba97bfc82f68afd01f4487e303aa736a7063c8757ffecf"} Dec 05 10:39:34 crc kubenswrapper[4796]: I1205 10:39:34.937972 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8667b5c969stxrl" Dec 05 10:39:34 crc kubenswrapper[4796]: I1205 10:39:34.939640 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-847b767f55-wqhnd" event={"ID":"8e886ef4-4f20-49e6-93d8-d011ac192923","Type":"ContainerStarted","Data":"5d4a41553048b36e31d656df3b71e806349a9cf37dab2a072ddf7338bb93c70e"} Dec 05 10:39:34 crc kubenswrapper[4796]: I1205 10:39:34.939737 
4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-847b767f55-wqhnd" Dec 05 10:39:34 crc kubenswrapper[4796]: I1205 10:39:34.941047 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-748df9766b-sv8rl" event={"ID":"1fe815d0-1127-44a3-8d89-9964b3b5bbc2","Type":"ContainerStarted","Data":"c093e42666491bc26eb13100a93e5abb03c666776dfae90beeb45949c245c7f1"} Dec 05 10:39:34 crc kubenswrapper[4796]: I1205 10:39:34.942618 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-db55fc494-vtkgg" event={"ID":"09474b29-37f5-4e66-9314-6af690b94758","Type":"ContainerStarted","Data":"c280e03b6525e383fadf4c4db880fd51d46890c02d7af66897e7227c4bed1a3a"} Dec 05 10:39:34 crc kubenswrapper[4796]: I1205 10:39:34.949102 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-6f75fb6b58-gz4gq" podStartSLOduration=3.103462294 podStartE2EDuration="11.949091044s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:24.019834204 +0000 UTC m=+710.307939717" lastFinishedPulling="2025-12-05 10:39:32.865462953 +0000 UTC m=+719.153568467" observedRunningTime="2025-12-05 10:39:34.944498233 +0000 UTC m=+721.232603746" watchObservedRunningTime="2025-12-05 10:39:34.949091044 +0000 UTC m=+721.237196557" Dec 05 10:39:34 crc kubenswrapper[4796]: I1205 10:39:34.960355 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-5644f4c99-b5lst" podStartSLOduration=3.061775935 podStartE2EDuration="11.960337723s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:23.940717414 +0000 UTC m=+710.228822927" lastFinishedPulling="2025-12-05 10:39:32.839279203 +0000 UTC m=+719.127384715" 
observedRunningTime="2025-12-05 10:39:34.959117307 +0000 UTC m=+721.247222820" watchObservedRunningTime="2025-12-05 10:39:34.960337723 +0000 UTC m=+721.248443236" Dec 05 10:39:34 crc kubenswrapper[4796]: I1205 10:39:34.981853 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8667b5c969stxrl" podStartSLOduration=4.79090058 podStartE2EDuration="11.981838324s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:25.651803191 +0000 UTC m=+711.939908704" lastFinishedPulling="2025-12-05 10:39:32.842740935 +0000 UTC m=+719.130846448" observedRunningTime="2025-12-05 10:39:34.977093717 +0000 UTC m=+721.265199230" watchObservedRunningTime="2025-12-05 10:39:34.981838324 +0000 UTC m=+721.269943837" Dec 05 10:39:34 crc kubenswrapper[4796]: I1205 10:39:34.990616 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-847b767f55-wqhnd" podStartSLOduration=3.242323863 podStartE2EDuration="11.990602432s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:24.087660701 +0000 UTC m=+710.375766213" lastFinishedPulling="2025-12-05 10:39:32.835939268 +0000 UTC m=+719.124044782" observedRunningTime="2025-12-05 10:39:34.987546883 +0000 UTC m=+721.275652396" watchObservedRunningTime="2025-12-05 10:39:34.990602432 +0000 UTC m=+721.278707935" Dec 05 10:39:34 crc kubenswrapper[4796]: I1205 10:39:34.997940 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6b45f74f94-l9pgt" podStartSLOduration=3.097516712 podStartE2EDuration="11.997928925s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:23.917416133 +0000 UTC m=+710.205521646" lastFinishedPulling="2025-12-05 10:39:32.817828346 +0000 UTC m=+719.105933859" 
observedRunningTime="2025-12-05 10:39:34.997129291 +0000 UTC m=+721.285234804" watchObservedRunningTime="2025-12-05 10:39:34.997928925 +0000 UTC m=+721.286034438" Dec 05 10:39:35 crc kubenswrapper[4796]: I1205 10:39:35.177862 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:39:35 crc kubenswrapper[4796]: I1205 10:39:35.177932 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:39:40 crc kubenswrapper[4796]: I1205 10:39:40.616859 4796 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 05 10:39:40 crc kubenswrapper[4796]: I1205 10:39:40.977347 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79b74dfcd4-mhcb5" event={"ID":"91dc33f2-985f-41d9-8c36-4c37aed1ec16","Type":"ContainerStarted","Data":"13991cb339221ccf5da03baecbe5bec90c9ce9a11b320cd9b82100e1b7b85f2a"} Dec 05 10:39:40 crc kubenswrapper[4796]: I1205 10:39:40.977752 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79b74dfcd4-mhcb5" Dec 05 10:39:40 crc kubenswrapper[4796]: I1205 10:39:40.980671 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5bf496986d-rfkkm" 
event={"ID":"e83951e6-5692-458d-aeba-ae8e6e8cfdd5","Type":"ContainerStarted","Data":"1e6746e395255bc107629c57e845c1549488eae71a186361ffce2eeadfb2e2ce"} Dec 05 10:39:40 crc kubenswrapper[4796]: I1205 10:39:40.981113 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5bf496986d-rfkkm" Dec 05 10:39:40 crc kubenswrapper[4796]: I1205 10:39:40.988549 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79b74dfcd4-mhcb5" podStartSLOduration=2.225040366 podStartE2EDuration="17.988541136s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:24.501340373 +0000 UTC m=+710.789445887" lastFinishedPulling="2025-12-05 10:39:40.264841144 +0000 UTC m=+726.552946657" observedRunningTime="2025-12-05 10:39:40.98727788 +0000 UTC m=+727.275383393" watchObservedRunningTime="2025-12-05 10:39:40.988541136 +0000 UTC m=+727.276646649" Dec 05 10:39:41 crc kubenswrapper[4796]: I1205 10:39:41.000218 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5bf496986d-rfkkm" podStartSLOduration=2.155281604 podStartE2EDuration="18.000203348s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:24.422205419 +0000 UTC m=+710.710310933" lastFinishedPulling="2025-12-05 10:39:40.267127164 +0000 UTC m=+726.555232677" observedRunningTime="2025-12-05 10:39:40.996887829 +0000 UTC m=+727.284993352" watchObservedRunningTime="2025-12-05 10:39:41.000203348 +0000 UTC m=+727.288308860" Dec 05 10:39:42 crc kubenswrapper[4796]: I1205 10:39:42.728090 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jjr7j"] Dec 05 10:39:42 crc kubenswrapper[4796]: I1205 10:39:42.730203 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jjr7j" Dec 05 10:39:42 crc kubenswrapper[4796]: I1205 10:39:42.736978 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jjr7j"] Dec 05 10:39:42 crc kubenswrapper[4796]: I1205 10:39:42.793585 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76c05234-07b8-4149-8698-9fc1d51c586b-utilities\") pod \"certified-operators-jjr7j\" (UID: \"76c05234-07b8-4149-8698-9fc1d51c586b\") " pod="openshift-marketplace/certified-operators-jjr7j" Dec 05 10:39:42 crc kubenswrapper[4796]: I1205 10:39:42.793659 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddzpr\" (UniqueName: \"kubernetes.io/projected/76c05234-07b8-4149-8698-9fc1d51c586b-kube-api-access-ddzpr\") pod \"certified-operators-jjr7j\" (UID: \"76c05234-07b8-4149-8698-9fc1d51c586b\") " pod="openshift-marketplace/certified-operators-jjr7j" Dec 05 10:39:42 crc kubenswrapper[4796]: I1205 10:39:42.793793 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76c05234-07b8-4149-8698-9fc1d51c586b-catalog-content\") pod \"certified-operators-jjr7j\" (UID: \"76c05234-07b8-4149-8698-9fc1d51c586b\") " pod="openshift-marketplace/certified-operators-jjr7j" Dec 05 10:39:42 crc kubenswrapper[4796]: I1205 10:39:42.894979 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddzpr\" (UniqueName: \"kubernetes.io/projected/76c05234-07b8-4149-8698-9fc1d51c586b-kube-api-access-ddzpr\") pod \"certified-operators-jjr7j\" (UID: \"76c05234-07b8-4149-8698-9fc1d51c586b\") " pod="openshift-marketplace/certified-operators-jjr7j" Dec 05 10:39:42 crc kubenswrapper[4796]: I1205 10:39:42.895124 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76c05234-07b8-4149-8698-9fc1d51c586b-catalog-content\") pod \"certified-operators-jjr7j\" (UID: \"76c05234-07b8-4149-8698-9fc1d51c586b\") " pod="openshift-marketplace/certified-operators-jjr7j" Dec 05 10:39:42 crc kubenswrapper[4796]: I1205 10:39:42.895189 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76c05234-07b8-4149-8698-9fc1d51c586b-utilities\") pod \"certified-operators-jjr7j\" (UID: \"76c05234-07b8-4149-8698-9fc1d51c586b\") " pod="openshift-marketplace/certified-operators-jjr7j" Dec 05 10:39:42 crc kubenswrapper[4796]: I1205 10:39:42.895869 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76c05234-07b8-4149-8698-9fc1d51c586b-catalog-content\") pod \"certified-operators-jjr7j\" (UID: \"76c05234-07b8-4149-8698-9fc1d51c586b\") " pod="openshift-marketplace/certified-operators-jjr7j" Dec 05 10:39:42 crc kubenswrapper[4796]: I1205 10:39:42.896184 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76c05234-07b8-4149-8698-9fc1d51c586b-utilities\") pod \"certified-operators-jjr7j\" (UID: \"76c05234-07b8-4149-8698-9fc1d51c586b\") " pod="openshift-marketplace/certified-operators-jjr7j" Dec 05 10:39:42 crc kubenswrapper[4796]: I1205 10:39:42.911699 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddzpr\" (UniqueName: \"kubernetes.io/projected/76c05234-07b8-4149-8698-9fc1d51c586b-kube-api-access-ddzpr\") pod \"certified-operators-jjr7j\" (UID: \"76c05234-07b8-4149-8698-9fc1d51c586b\") " pod="openshift-marketplace/certified-operators-jjr7j" Dec 05 10:39:43 crc kubenswrapper[4796]: I1205 10:39:43.050881 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jjr7j" Dec 05 10:39:43 crc kubenswrapper[4796]: I1205 10:39:43.330961 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-748df9766b-sv8rl" Dec 05 10:39:43 crc kubenswrapper[4796]: I1205 10:39:43.342416 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6b45f74f94-l9pgt" Dec 05 10:39:43 crc kubenswrapper[4796]: I1205 10:39:43.348312 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-5644f4c99-b5lst" Dec 05 10:39:43 crc kubenswrapper[4796]: I1205 10:39:43.380726 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-6f75fb6b58-gz4gq" Dec 05 10:39:43 crc kubenswrapper[4796]: I1205 10:39:43.399720 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-db55fc494-vtkgg" Dec 05 10:39:43 crc kubenswrapper[4796]: I1205 10:39:43.430664 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-86b7548d4c-d59d5" Dec 05 10:39:43 crc kubenswrapper[4796]: I1205 10:39:43.479665 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-7c55bc5499-tx2js" Dec 05 10:39:43 crc kubenswrapper[4796]: I1205 10:39:43.496443 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-847b767f55-wqhnd" Dec 05 10:39:43 crc kubenswrapper[4796]: I1205 10:39:43.548878 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/manila-operator-controller-manager-59c7d85948-v5lcv" Dec 05 10:39:43 crc kubenswrapper[4796]: I1205 10:39:43.583881 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66b4f6f898-cqrd7" Dec 05 10:39:43 crc kubenswrapper[4796]: I1205 10:39:43.590892 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7f8bc7fb5-pm9cz" Dec 05 10:39:43 crc kubenswrapper[4796]: I1205 10:39:43.601511 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6869548bb4-wsr9d" Dec 05 10:39:43 crc kubenswrapper[4796]: I1205 10:39:43.640316 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-59f9cfd57b-c586s" Dec 05 10:39:44 crc kubenswrapper[4796]: I1205 10:39:44.180759 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jjr7j"] Dec 05 10:39:45 crc kubenswrapper[4796]: I1205 10:39:45.003153 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjr7j" event={"ID":"76c05234-07b8-4149-8698-9fc1d51c586b","Type":"ContainerStarted","Data":"52b0b1d2d034055933d0fda108bfced43e097ed9886e3544c38731e4a4d1ac03"} Dec 05 10:39:45 crc kubenswrapper[4796]: I1205 10:39:45.112314 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-8667b5c969stxrl" Dec 05 10:39:48 crc kubenswrapper[4796]: I1205 10:39:48.021045 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-75f49469b9-rs7fq" 
event={"ID":"622bf26a-5bd4-4936-bd06-ae5ec514f130","Type":"ContainerStarted","Data":"0ccdf83b3a9da52c1c3c0f1434d0c6b6f9a7fc84d8d27f2b399415c0c51f45af"} Dec 05 10:39:48 crc kubenswrapper[4796]: I1205 10:39:48.021959 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-75f49469b9-rs7fq" Dec 05 10:39:48 crc kubenswrapper[4796]: I1205 10:39:48.023840 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn" event={"ID":"ae34e165-a87d-4395-99c5-1a9f7129e6fe","Type":"ContainerStarted","Data":"953b155d8ad8547127305452a669f9bbf78e76d95afd0db630e45be720a79a9d"} Dec 05 10:39:48 crc kubenswrapper[4796]: I1205 10:39:48.025514 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-589d6b8ccb-h27pk" event={"ID":"b566972c-0250-4692-8152-31dc732b4147","Type":"ContainerStarted","Data":"d771907c115ed2f60b721c6c27b7256f8a6e1c609bce3acbc671f43756bdb3b2"} Dec 05 10:39:48 crc kubenswrapper[4796]: I1205 10:39:48.025781 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-589d6b8ccb-h27pk" Dec 05 10:39:48 crc kubenswrapper[4796]: I1205 10:39:48.028245 4796 generic.go:334] "Generic (PLEG): container finished" podID="76c05234-07b8-4149-8698-9fc1d51c586b" containerID="9459374a1c1e1834cf18e80843c9b8b0358e6121b8fc53b3ddc73782af0eedd6" exitCode=0 Dec 05 10:39:48 crc kubenswrapper[4796]: I1205 10:39:48.028300 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjr7j" event={"ID":"76c05234-07b8-4149-8698-9fc1d51c586b","Type":"ContainerDied","Data":"9459374a1c1e1834cf18e80843c9b8b0358e6121b8fc53b3ddc73782af0eedd6"} Dec 05 10:39:48 crc kubenswrapper[4796]: I1205 10:39:48.037203 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/telemetry-operator-controller-manager-75f49469b9-rs7fq" podStartSLOduration=5.713364776 podStartE2EDuration="25.037193609s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:24.483885322 +0000 UTC m=+710.771990834" lastFinishedPulling="2025-12-05 10:39:43.807714153 +0000 UTC m=+730.095819667" observedRunningTime="2025-12-05 10:39:48.035303314 +0000 UTC m=+734.323408827" watchObservedRunningTime="2025-12-05 10:39:48.037193609 +0000 UTC m=+734.325299123" Dec 05 10:39:48 crc kubenswrapper[4796]: I1205 10:39:48.042931 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv" event={"ID":"1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96","Type":"ContainerStarted","Data":"17ec0cadd3a479f6f8fd90b43e0ec8f884c0770c3189261522859a72104efa35"} Dec 05 10:39:48 crc kubenswrapper[4796]: I1205 10:39:48.043180 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv" Dec 05 10:39:48 crc kubenswrapper[4796]: I1205 10:39:48.052698 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-589d6b8ccb-h27pk" podStartSLOduration=5.760624773 podStartE2EDuration="25.052673452s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:24.514592978 +0000 UTC m=+710.802698490" lastFinishedPulling="2025-12-05 10:39:43.806641656 +0000 UTC m=+730.094747169" observedRunningTime="2025-12-05 10:39:48.050257607 +0000 UTC m=+734.338363121" watchObservedRunningTime="2025-12-05 10:39:48.052673452 +0000 UTC m=+734.340778965" Dec 05 10:39:48 crc kubenswrapper[4796]: I1205 10:39:48.077099 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn" podStartSLOduration=5.720855031 
podStartE2EDuration="25.077084898s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:24.501187646 +0000 UTC m=+710.789293159" lastFinishedPulling="2025-12-05 10:39:43.857417513 +0000 UTC m=+730.145523026" observedRunningTime="2025-12-05 10:39:48.072742199 +0000 UTC m=+734.360847712" watchObservedRunningTime="2025-12-05 10:39:48.077084898 +0000 UTC m=+734.365190412" Dec 05 10:39:48 crc kubenswrapper[4796]: I1205 10:39:48.084428 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv" podStartSLOduration=5.68685696 podStartE2EDuration="25.084412284s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:24.442616303 +0000 UTC m=+710.730721816" lastFinishedPulling="2025-12-05 10:39:43.840171627 +0000 UTC m=+730.128277140" observedRunningTime="2025-12-05 10:39:48.083739959 +0000 UTC m=+734.371845473" watchObservedRunningTime="2025-12-05 10:39:48.084412284 +0000 UTC m=+734.372517787" Dec 05 10:39:49 crc kubenswrapper[4796]: I1205 10:39:49.043379 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd78796cb-gcttm" event={"ID":"aa5e433c-e704-4cbf-8db3-6efe20814f65","Type":"ContainerStarted","Data":"9b7374ac432be9b8721c4ba3d7f7dfa7c7c2c4f36a737046dbbf213601eebe4b"} Dec 05 10:39:49 crc kubenswrapper[4796]: I1205 10:39:49.043846 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd78796cb-gcttm" Dec 05 10:39:49 crc kubenswrapper[4796]: I1205 10:39:49.046764 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-784c978c5-v2fgq" event={"ID":"a9f4475f-8ecc-4bc3-a195-e5cf592a1324","Type":"ContainerStarted","Data":"790c6c76fbaed3a1efdf90672fec5248a2d02f82dc24b4413504e2bd8a902e32"} Dec 05 10:39:49 crc 
kubenswrapper[4796]: I1205 10:39:49.047300 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-784c978c5-v2fgq" Dec 05 10:39:49 crc kubenswrapper[4796]: I1205 10:39:49.066181 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7cd78796cb-gcttm" podStartSLOduration=2.354797876 podStartE2EDuration="26.066153552s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:24.514586836 +0000 UTC m=+710.802692349" lastFinishedPulling="2025-12-05 10:39:48.225942512 +0000 UTC m=+734.514048025" observedRunningTime="2025-12-05 10:39:49.056320733 +0000 UTC m=+735.344426246" watchObservedRunningTime="2025-12-05 10:39:49.066153552 +0000 UTC m=+735.354259065" Dec 05 10:39:49 crc kubenswrapper[4796]: I1205 10:39:49.075755 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-784c978c5-v2fgq" podStartSLOduration=2.336376294 podStartE2EDuration="26.075738655s" podCreationTimestamp="2025-12-05 10:39:23 +0000 UTC" firstStartedPulling="2025-12-05 10:39:24.482393194 +0000 UTC m=+710.770498697" lastFinishedPulling="2025-12-05 10:39:48.221755544 +0000 UTC m=+734.509861058" observedRunningTime="2025-12-05 10:39:49.068516498 +0000 UTC m=+735.356622011" watchObservedRunningTime="2025-12-05 10:39:49.075738655 +0000 UTC m=+735.363844168" Dec 05 10:39:50 crc kubenswrapper[4796]: I1205 10:39:50.052474 4796 generic.go:334] "Generic (PLEG): container finished" podID="76c05234-07b8-4149-8698-9fc1d51c586b" containerID="eea7ad31cef248ed5f4bb9a62ed975d5cf98b01bfcb7a4e966dfd04efb50e235" exitCode=0 Dec 05 10:39:50 crc kubenswrapper[4796]: I1205 10:39:50.052508 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjr7j" 
event={"ID":"76c05234-07b8-4149-8698-9fc1d51c586b","Type":"ContainerDied","Data":"eea7ad31cef248ed5f4bb9a62ed975d5cf98b01bfcb7a4e966dfd04efb50e235"} Dec 05 10:39:51 crc kubenswrapper[4796]: I1205 10:39:51.065102 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjr7j" event={"ID":"76c05234-07b8-4149-8698-9fc1d51c586b","Type":"ContainerStarted","Data":"913b352ba684976e51684700d903c3e6c36fdf8dda643694dd1990b67694bb69"} Dec 05 10:39:51 crc kubenswrapper[4796]: I1205 10:39:51.081512 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jjr7j" podStartSLOduration=6.771051541 podStartE2EDuration="9.081495001s" podCreationTimestamp="2025-12-05 10:39:42 +0000 UTC" firstStartedPulling="2025-12-05 10:39:48.197199306 +0000 UTC m=+734.485304819" lastFinishedPulling="2025-12-05 10:39:50.507642766 +0000 UTC m=+736.795748279" observedRunningTime="2025-12-05 10:39:51.077997933 +0000 UTC m=+737.366103446" watchObservedRunningTime="2025-12-05 10:39:51.081495001 +0000 UTC m=+737.369600514" Dec 05 10:39:53 crc kubenswrapper[4796]: I1205 10:39:53.051959 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jjr7j" Dec 05 10:39:53 crc kubenswrapper[4796]: I1205 10:39:53.052220 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jjr7j" Dec 05 10:39:53 crc kubenswrapper[4796]: I1205 10:39:53.079224 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jjr7j" Dec 05 10:39:53 crc kubenswrapper[4796]: I1205 10:39:53.574474 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79b74dfcd4-mhcb5" Dec 05 10:39:53 crc kubenswrapper[4796]: I1205 10:39:53.709723 4796 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-589d6b8ccb-h27pk" Dec 05 10:39:53 crc kubenswrapper[4796]: I1205 10:39:53.732614 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5bf496986d-rfkkm" Dec 05 10:39:53 crc kubenswrapper[4796]: I1205 10:39:53.748532 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-75f49469b9-rs7fq" Dec 05 10:39:53 crc kubenswrapper[4796]: I1205 10:39:53.845888 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7cd78796cb-gcttm" Dec 05 10:39:53 crc kubenswrapper[4796]: I1205 10:39:53.961434 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-784c978c5-v2fgq" Dec 05 10:39:54 crc kubenswrapper[4796]: I1205 10:39:54.037816 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-64989647d4-6pkqv" Dec 05 10:40:03 crc kubenswrapper[4796]: I1205 10:40:03.080519 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jjr7j" Dec 05 10:40:03 crc kubenswrapper[4796]: I1205 10:40:03.112367 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jjr7j"] Dec 05 10:40:03 crc kubenswrapper[4796]: I1205 10:40:03.126461 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jjr7j" podUID="76c05234-07b8-4149-8698-9fc1d51c586b" containerName="registry-server" containerID="cri-o://913b352ba684976e51684700d903c3e6c36fdf8dda643694dd1990b67694bb69" gracePeriod=2 Dec 05 10:40:03 crc kubenswrapper[4796]: I1205 10:40:03.459033 4796 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jjr7j" Dec 05 10:40:03 crc kubenswrapper[4796]: I1205 10:40:03.638446 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76c05234-07b8-4149-8698-9fc1d51c586b-utilities\") pod \"76c05234-07b8-4149-8698-9fc1d51c586b\" (UID: \"76c05234-07b8-4149-8698-9fc1d51c586b\") " Dec 05 10:40:03 crc kubenswrapper[4796]: I1205 10:40:03.638479 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76c05234-07b8-4149-8698-9fc1d51c586b-catalog-content\") pod \"76c05234-07b8-4149-8698-9fc1d51c586b\" (UID: \"76c05234-07b8-4149-8698-9fc1d51c586b\") " Dec 05 10:40:03 crc kubenswrapper[4796]: I1205 10:40:03.638524 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddzpr\" (UniqueName: \"kubernetes.io/projected/76c05234-07b8-4149-8698-9fc1d51c586b-kube-api-access-ddzpr\") pod \"76c05234-07b8-4149-8698-9fc1d51c586b\" (UID: \"76c05234-07b8-4149-8698-9fc1d51c586b\") " Dec 05 10:40:03 crc kubenswrapper[4796]: I1205 10:40:03.645222 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76c05234-07b8-4149-8698-9fc1d51c586b-kube-api-access-ddzpr" (OuterVolumeSpecName: "kube-api-access-ddzpr") pod "76c05234-07b8-4149-8698-9fc1d51c586b" (UID: "76c05234-07b8-4149-8698-9fc1d51c586b"). InnerVolumeSpecName "kube-api-access-ddzpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:40:03 crc kubenswrapper[4796]: I1205 10:40:03.648138 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76c05234-07b8-4149-8698-9fc1d51c586b-utilities" (OuterVolumeSpecName: "utilities") pod "76c05234-07b8-4149-8698-9fc1d51c586b" (UID: "76c05234-07b8-4149-8698-9fc1d51c586b"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:40:03 crc kubenswrapper[4796]: I1205 10:40:03.674360 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76c05234-07b8-4149-8698-9fc1d51c586b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76c05234-07b8-4149-8698-9fc1d51c586b" (UID: "76c05234-07b8-4149-8698-9fc1d51c586b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:40:03 crc kubenswrapper[4796]: I1205 10:40:03.739600 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76c05234-07b8-4149-8698-9fc1d51c586b-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:03 crc kubenswrapper[4796]: I1205 10:40:03.739617 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76c05234-07b8-4149-8698-9fc1d51c586b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:03 crc kubenswrapper[4796]: I1205 10:40:03.739629 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddzpr\" (UniqueName: \"kubernetes.io/projected/76c05234-07b8-4149-8698-9fc1d51c586b-kube-api-access-ddzpr\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:04 crc kubenswrapper[4796]: I1205 10:40:04.132956 4796 generic.go:334] "Generic (PLEG): container finished" podID="76c05234-07b8-4149-8698-9fc1d51c586b" containerID="913b352ba684976e51684700d903c3e6c36fdf8dda643694dd1990b67694bb69" exitCode=0 Dec 05 10:40:04 crc kubenswrapper[4796]: I1205 10:40:04.132992 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjr7j" event={"ID":"76c05234-07b8-4149-8698-9fc1d51c586b","Type":"ContainerDied","Data":"913b352ba684976e51684700d903c3e6c36fdf8dda643694dd1990b67694bb69"} Dec 05 10:40:04 crc kubenswrapper[4796]: I1205 10:40:04.133015 4796 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjr7j" event={"ID":"76c05234-07b8-4149-8698-9fc1d51c586b","Type":"ContainerDied","Data":"52b0b1d2d034055933d0fda108bfced43e097ed9886e3544c38731e4a4d1ac03"}
Dec 05 10:40:04 crc kubenswrapper[4796]: I1205 10:40:04.133030 4796 scope.go:117] "RemoveContainer" containerID="913b352ba684976e51684700d903c3e6c36fdf8dda643694dd1990b67694bb69"
Dec 05 10:40:04 crc kubenswrapper[4796]: I1205 10:40:04.133132 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jjr7j"
Dec 05 10:40:04 crc kubenswrapper[4796]: I1205 10:40:04.147575 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jjr7j"]
Dec 05 10:40:04 crc kubenswrapper[4796]: I1205 10:40:04.151522 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jjr7j"]
Dec 05 10:40:04 crc kubenswrapper[4796]: I1205 10:40:04.151931 4796 scope.go:117] "RemoveContainer" containerID="eea7ad31cef248ed5f4bb9a62ed975d5cf98b01bfcb7a4e966dfd04efb50e235"
Dec 05 10:40:04 crc kubenswrapper[4796]: I1205 10:40:04.170963 4796 scope.go:117] "RemoveContainer" containerID="9459374a1c1e1834cf18e80843c9b8b0358e6121b8fc53b3ddc73782af0eedd6"
Dec 05 10:40:04 crc kubenswrapper[4796]: I1205 10:40:04.182436 4796 scope.go:117] "RemoveContainer" containerID="913b352ba684976e51684700d903c3e6c36fdf8dda643694dd1990b67694bb69"
Dec 05 10:40:04 crc kubenswrapper[4796]: E1205 10:40:04.182692 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"913b352ba684976e51684700d903c3e6c36fdf8dda643694dd1990b67694bb69\": container with ID starting with 913b352ba684976e51684700d903c3e6c36fdf8dda643694dd1990b67694bb69 not found: ID does not exist" containerID="913b352ba684976e51684700d903c3e6c36fdf8dda643694dd1990b67694bb69"
Dec 05 10:40:04 crc kubenswrapper[4796]: I1205 10:40:04.182720 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"913b352ba684976e51684700d903c3e6c36fdf8dda643694dd1990b67694bb69"} err="failed to get container status \"913b352ba684976e51684700d903c3e6c36fdf8dda643694dd1990b67694bb69\": rpc error: code = NotFound desc = could not find container \"913b352ba684976e51684700d903c3e6c36fdf8dda643694dd1990b67694bb69\": container with ID starting with 913b352ba684976e51684700d903c3e6c36fdf8dda643694dd1990b67694bb69 not found: ID does not exist"
Dec 05 10:40:04 crc kubenswrapper[4796]: I1205 10:40:04.182738 4796 scope.go:117] "RemoveContainer" containerID="eea7ad31cef248ed5f4bb9a62ed975d5cf98b01bfcb7a4e966dfd04efb50e235"
Dec 05 10:40:04 crc kubenswrapper[4796]: E1205 10:40:04.182968 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eea7ad31cef248ed5f4bb9a62ed975d5cf98b01bfcb7a4e966dfd04efb50e235\": container with ID starting with eea7ad31cef248ed5f4bb9a62ed975d5cf98b01bfcb7a4e966dfd04efb50e235 not found: ID does not exist" containerID="eea7ad31cef248ed5f4bb9a62ed975d5cf98b01bfcb7a4e966dfd04efb50e235"
Dec 05 10:40:04 crc kubenswrapper[4796]: I1205 10:40:04.183004 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eea7ad31cef248ed5f4bb9a62ed975d5cf98b01bfcb7a4e966dfd04efb50e235"} err="failed to get container status \"eea7ad31cef248ed5f4bb9a62ed975d5cf98b01bfcb7a4e966dfd04efb50e235\": rpc error: code = NotFound desc = could not find container \"eea7ad31cef248ed5f4bb9a62ed975d5cf98b01bfcb7a4e966dfd04efb50e235\": container with ID starting with eea7ad31cef248ed5f4bb9a62ed975d5cf98b01bfcb7a4e966dfd04efb50e235 not found: ID does not exist"
Dec 05 10:40:04 crc kubenswrapper[4796]: I1205 10:40:04.183027 4796 scope.go:117] "RemoveContainer" containerID="9459374a1c1e1834cf18e80843c9b8b0358e6121b8fc53b3ddc73782af0eedd6"
Dec 05 10:40:04 crc kubenswrapper[4796]: E1205 10:40:04.183260 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9459374a1c1e1834cf18e80843c9b8b0358e6121b8fc53b3ddc73782af0eedd6\": container with ID starting with 9459374a1c1e1834cf18e80843c9b8b0358e6121b8fc53b3ddc73782af0eedd6 not found: ID does not exist" containerID="9459374a1c1e1834cf18e80843c9b8b0358e6121b8fc53b3ddc73782af0eedd6"
Dec 05 10:40:04 crc kubenswrapper[4796]: I1205 10:40:04.183283 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9459374a1c1e1834cf18e80843c9b8b0358e6121b8fc53b3ddc73782af0eedd6"} err="failed to get container status \"9459374a1c1e1834cf18e80843c9b8b0358e6121b8fc53b3ddc73782af0eedd6\": rpc error: code = NotFound desc = could not find container \"9459374a1c1e1834cf18e80843c9b8b0358e6121b8fc53b3ddc73782af0eedd6\": container with ID starting with 9459374a1c1e1834cf18e80843c9b8b0358e6121b8fc53b3ddc73782af0eedd6 not found: ID does not exist"
Dec 05 10:40:05 crc kubenswrapper[4796]: I1205 10:40:05.177309 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 05 10:40:05 crc kubenswrapper[4796]: I1205 10:40:05.177356 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 05 10:40:05 crc kubenswrapper[4796]: I1205 10:40:05.177390 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9pllw"
Dec 05 10:40:05 crc kubenswrapper[4796]: I1205 10:40:05.177753 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"11955f72d8cab8d1dbc9ce29d0048cc5cd67ff89aba57e0b67af6a83dc91a6d7"} pod="openshift-machine-config-operator/machine-config-daemon-9pllw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 05 10:40:05 crc kubenswrapper[4796]: I1205 10:40:05.177804 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" containerID="cri-o://11955f72d8cab8d1dbc9ce29d0048cc5cd67ff89aba57e0b67af6a83dc91a6d7" gracePeriod=600
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.037050 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76c05234-07b8-4149-8698-9fc1d51c586b" path="/var/lib/kubelet/pods/76c05234-07b8-4149-8698-9fc1d51c586b/volumes"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.145299 4796 generic.go:334] "Generic (PLEG): container finished" podID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerID="11955f72d8cab8d1dbc9ce29d0048cc5cd67ff89aba57e0b67af6a83dc91a6d7" exitCode=0
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.145368 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerDied","Data":"11955f72d8cab8d1dbc9ce29d0048cc5cd67ff89aba57e0b67af6a83dc91a6d7"}
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.145650 4796 scope.go:117] "RemoveContainer" containerID="5f20b7eecdefbb86282fce61808a4c3f117c21f24b1bad20aefe872aef8a2e8b"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.567884 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cd484bb89-pk8qm"]
Dec 05 10:40:06 crc kubenswrapper[4796]: E1205 10:40:06.568346 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76c05234-07b8-4149-8698-9fc1d51c586b" containerName="registry-server"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.568358 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="76c05234-07b8-4149-8698-9fc1d51c586b" containerName="registry-server"
Dec 05 10:40:06 crc kubenswrapper[4796]: E1205 10:40:06.568386 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76c05234-07b8-4149-8698-9fc1d51c586b" containerName="extract-utilities"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.568393 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="76c05234-07b8-4149-8698-9fc1d51c586b" containerName="extract-utilities"
Dec 05 10:40:06 crc kubenswrapper[4796]: E1205 10:40:06.568414 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76c05234-07b8-4149-8698-9fc1d51c586b" containerName="extract-content"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.568419 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="76c05234-07b8-4149-8698-9fc1d51c586b" containerName="extract-content"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.568542 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="76c05234-07b8-4149-8698-9fc1d51c586b" containerName="registry-server"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.569168 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd484bb89-pk8qm"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.574970 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.575127 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.575244 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.575359 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-h4hwz"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.580205 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cd484bb89-pk8qm"]
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.618353 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-567c455747-m98ht"]
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.619369 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-567c455747-m98ht"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.624019 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.625623 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-567c455747-m98ht"]
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.692634 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj2s7\" (UniqueName: \"kubernetes.io/projected/9b711e05-6b25-48ff-9228-abaab40dae1e-kube-api-access-pj2s7\") pod \"dnsmasq-dns-5cd484bb89-pk8qm\" (UID: \"9b711e05-6b25-48ff-9228-abaab40dae1e\") " pod="openstack/dnsmasq-dns-5cd484bb89-pk8qm"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.692904 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b711e05-6b25-48ff-9228-abaab40dae1e-config\") pod \"dnsmasq-dns-5cd484bb89-pk8qm\" (UID: \"9b711e05-6b25-48ff-9228-abaab40dae1e\") " pod="openstack/dnsmasq-dns-5cd484bb89-pk8qm"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.794442 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b711e05-6b25-48ff-9228-abaab40dae1e-config\") pod \"dnsmasq-dns-5cd484bb89-pk8qm\" (UID: \"9b711e05-6b25-48ff-9228-abaab40dae1e\") " pod="openstack/dnsmasq-dns-5cd484bb89-pk8qm"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.794734 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65453db8-d128-4c96-801f-78ae59451fe5-dns-svc\") pod \"dnsmasq-dns-567c455747-m98ht\" (UID: \"65453db8-d128-4c96-801f-78ae59451fe5\") " pod="openstack/dnsmasq-dns-567c455747-m98ht"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.794975 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzllb\" (UniqueName: \"kubernetes.io/projected/65453db8-d128-4c96-801f-78ae59451fe5-kube-api-access-jzllb\") pod \"dnsmasq-dns-567c455747-m98ht\" (UID: \"65453db8-d128-4c96-801f-78ae59451fe5\") " pod="openstack/dnsmasq-dns-567c455747-m98ht"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.795099 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj2s7\" (UniqueName: \"kubernetes.io/projected/9b711e05-6b25-48ff-9228-abaab40dae1e-kube-api-access-pj2s7\") pod \"dnsmasq-dns-5cd484bb89-pk8qm\" (UID: \"9b711e05-6b25-48ff-9228-abaab40dae1e\") " pod="openstack/dnsmasq-dns-5cd484bb89-pk8qm"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.795190 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65453db8-d128-4c96-801f-78ae59451fe5-config\") pod \"dnsmasq-dns-567c455747-m98ht\" (UID: \"65453db8-d128-4c96-801f-78ae59451fe5\") " pod="openstack/dnsmasq-dns-567c455747-m98ht"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.795605 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b711e05-6b25-48ff-9228-abaab40dae1e-config\") pod \"dnsmasq-dns-5cd484bb89-pk8qm\" (UID: \"9b711e05-6b25-48ff-9228-abaab40dae1e\") " pod="openstack/dnsmasq-dns-5cd484bb89-pk8qm"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.809914 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj2s7\" (UniqueName: \"kubernetes.io/projected/9b711e05-6b25-48ff-9228-abaab40dae1e-kube-api-access-pj2s7\") pod \"dnsmasq-dns-5cd484bb89-pk8qm\" (UID: \"9b711e05-6b25-48ff-9228-abaab40dae1e\") " pod="openstack/dnsmasq-dns-5cd484bb89-pk8qm"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.896730 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65453db8-d128-4c96-801f-78ae59451fe5-config\") pod \"dnsmasq-dns-567c455747-m98ht\" (UID: \"65453db8-d128-4c96-801f-78ae59451fe5\") " pod="openstack/dnsmasq-dns-567c455747-m98ht"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.896785 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65453db8-d128-4c96-801f-78ae59451fe5-dns-svc\") pod \"dnsmasq-dns-567c455747-m98ht\" (UID: \"65453db8-d128-4c96-801f-78ae59451fe5\") " pod="openstack/dnsmasq-dns-567c455747-m98ht"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.896824 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzllb\" (UniqueName: \"kubernetes.io/projected/65453db8-d128-4c96-801f-78ae59451fe5-kube-api-access-jzllb\") pod \"dnsmasq-dns-567c455747-m98ht\" (UID: \"65453db8-d128-4c96-801f-78ae59451fe5\") " pod="openstack/dnsmasq-dns-567c455747-m98ht"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.897562 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65453db8-d128-4c96-801f-78ae59451fe5-config\") pod \"dnsmasq-dns-567c455747-m98ht\" (UID: \"65453db8-d128-4c96-801f-78ae59451fe5\") " pod="openstack/dnsmasq-dns-567c455747-m98ht"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.897604 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65453db8-d128-4c96-801f-78ae59451fe5-dns-svc\") pod \"dnsmasq-dns-567c455747-m98ht\" (UID: \"65453db8-d128-4c96-801f-78ae59451fe5\") " pod="openstack/dnsmasq-dns-567c455747-m98ht"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.911736 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzllb\" (UniqueName: \"kubernetes.io/projected/65453db8-d128-4c96-801f-78ae59451fe5-kube-api-access-jzllb\") pod \"dnsmasq-dns-567c455747-m98ht\" (UID: \"65453db8-d128-4c96-801f-78ae59451fe5\") " pod="openstack/dnsmasq-dns-567c455747-m98ht"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.914585 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd484bb89-pk8qm"
Dec 05 10:40:06 crc kubenswrapper[4796]: I1205 10:40:06.945281 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-567c455747-m98ht"
Dec 05 10:40:07 crc kubenswrapper[4796]: I1205 10:40:07.152907 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerStarted","Data":"43e07b991ca33a9b26481e28d45698e5b0116b0edd51039d2f5f22853ca65e61"}
Dec 05 10:40:07 crc kubenswrapper[4796]: I1205 10:40:07.272258 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cd484bb89-pk8qm"]
Dec 05 10:40:07 crc kubenswrapper[4796]: W1205 10:40:07.273913 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b711e05_6b25_48ff_9228_abaab40dae1e.slice/crio-ae857ff9625bb575645aeab68eae90796da3b21008cbef8e51232cfaaf8cd44c WatchSource:0}: Error finding container ae857ff9625bb575645aeab68eae90796da3b21008cbef8e51232cfaaf8cd44c: Status 404 returned error can't find the container with id ae857ff9625bb575645aeab68eae90796da3b21008cbef8e51232cfaaf8cd44c
Dec 05 10:40:07 crc kubenswrapper[4796]: I1205 10:40:07.327166 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-567c455747-m98ht"]
Dec 05 10:40:07 crc kubenswrapper[4796]: W1205 10:40:07.328422 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65453db8_d128_4c96_801f_78ae59451fe5.slice/crio-5c3e12d99e3807322be3f51b63fb1267afa5880993a296db090f003d70a54298 WatchSource:0}: Error finding container 5c3e12d99e3807322be3f51b63fb1267afa5880993a296db090f003d70a54298: Status 404 returned error can't find the container with id 5c3e12d99e3807322be3f51b63fb1267afa5880993a296db090f003d70a54298
Dec 05 10:40:08 crc kubenswrapper[4796]: I1205 10:40:08.158576 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd484bb89-pk8qm" event={"ID":"9b711e05-6b25-48ff-9228-abaab40dae1e","Type":"ContainerStarted","Data":"ae857ff9625bb575645aeab68eae90796da3b21008cbef8e51232cfaaf8cd44c"}
Dec 05 10:40:08 crc kubenswrapper[4796]: I1205 10:40:08.159500 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567c455747-m98ht" event={"ID":"65453db8-d128-4c96-801f-78ae59451fe5","Type":"ContainerStarted","Data":"5c3e12d99e3807322be3f51b63fb1267afa5880993a296db090f003d70a54298"}
Dec 05 10:40:09 crc kubenswrapper[4796]: I1205 10:40:09.669209 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-567c455747-m98ht"]
Dec 05 10:40:09 crc kubenswrapper[4796]: I1205 10:40:09.686205 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bc4b48fc9-h6zwn"]
Dec 05 10:40:09 crc kubenswrapper[4796]: I1205 10:40:09.687926 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn"
Dec 05 10:40:09 crc kubenswrapper[4796]: I1205 10:40:09.697318 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bc4b48fc9-h6zwn"]
Dec 05 10:40:09 crc kubenswrapper[4796]: I1205 10:40:09.732453 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d89aa469-fd41-4537-935b-5a03efe42ce8-config\") pod \"dnsmasq-dns-bc4b48fc9-h6zwn\" (UID: \"d89aa469-fd41-4537-935b-5a03efe42ce8\") " pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn"
Dec 05 10:40:09 crc kubenswrapper[4796]: I1205 10:40:09.732513 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d89aa469-fd41-4537-935b-5a03efe42ce8-dns-svc\") pod \"dnsmasq-dns-bc4b48fc9-h6zwn\" (UID: \"d89aa469-fd41-4537-935b-5a03efe42ce8\") " pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn"
Dec 05 10:40:09 crc kubenswrapper[4796]: I1205 10:40:09.732606 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n22b\" (UniqueName: \"kubernetes.io/projected/d89aa469-fd41-4537-935b-5a03efe42ce8-kube-api-access-9n22b\") pod \"dnsmasq-dns-bc4b48fc9-h6zwn\" (UID: \"d89aa469-fd41-4537-935b-5a03efe42ce8\") " pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn"
Dec 05 10:40:09 crc kubenswrapper[4796]: I1205 10:40:09.833237 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n22b\" (UniqueName: \"kubernetes.io/projected/d89aa469-fd41-4537-935b-5a03efe42ce8-kube-api-access-9n22b\") pod \"dnsmasq-dns-bc4b48fc9-h6zwn\" (UID: \"d89aa469-fd41-4537-935b-5a03efe42ce8\") " pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn"
Dec 05 10:40:09 crc kubenswrapper[4796]: I1205 10:40:09.833330 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d89aa469-fd41-4537-935b-5a03efe42ce8-config\") pod \"dnsmasq-dns-bc4b48fc9-h6zwn\" (UID: \"d89aa469-fd41-4537-935b-5a03efe42ce8\") " pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn"
Dec 05 10:40:09 crc kubenswrapper[4796]: I1205 10:40:09.833366 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d89aa469-fd41-4537-935b-5a03efe42ce8-dns-svc\") pod \"dnsmasq-dns-bc4b48fc9-h6zwn\" (UID: \"d89aa469-fd41-4537-935b-5a03efe42ce8\") " pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn"
Dec 05 10:40:09 crc kubenswrapper[4796]: I1205 10:40:09.834405 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d89aa469-fd41-4537-935b-5a03efe42ce8-dns-svc\") pod \"dnsmasq-dns-bc4b48fc9-h6zwn\" (UID: \"d89aa469-fd41-4537-935b-5a03efe42ce8\") " pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn"
Dec 05 10:40:09 crc kubenswrapper[4796]: I1205 10:40:09.834403 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d89aa469-fd41-4537-935b-5a03efe42ce8-config\") pod \"dnsmasq-dns-bc4b48fc9-h6zwn\" (UID: \"d89aa469-fd41-4537-935b-5a03efe42ce8\") " pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn"
Dec 05 10:40:09 crc kubenswrapper[4796]: I1205 10:40:09.853317 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n22b\" (UniqueName: \"kubernetes.io/projected/d89aa469-fd41-4537-935b-5a03efe42ce8-kube-api-access-9n22b\") pod \"dnsmasq-dns-bc4b48fc9-h6zwn\" (UID: \"d89aa469-fd41-4537-935b-5a03efe42ce8\") " pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn"
Dec 05 10:40:09 crc kubenswrapper[4796]: I1205 10:40:09.924429 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd484bb89-pk8qm"]
Dec 05 10:40:09 crc kubenswrapper[4796]: I1205 10:40:09.936067 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb666b895-qsrsm"]
Dec 05 10:40:09 crc kubenswrapper[4796]: I1205 10:40:09.937274 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb666b895-qsrsm"
Dec 05 10:40:09 crc kubenswrapper[4796]: I1205 10:40:09.946988 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb666b895-qsrsm"]
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.008581 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.136543 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsc7q\" (UniqueName: \"kubernetes.io/projected/a7b55fe6-1d8f-4945-bba8-3c9c7bff3090-kube-api-access-vsc7q\") pod \"dnsmasq-dns-cb666b895-qsrsm\" (UID: \"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090\") " pod="openstack/dnsmasq-dns-cb666b895-qsrsm"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.136599 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7b55fe6-1d8f-4945-bba8-3c9c7bff3090-config\") pod \"dnsmasq-dns-cb666b895-qsrsm\" (UID: \"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090\") " pod="openstack/dnsmasq-dns-cb666b895-qsrsm"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.136627 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7b55fe6-1d8f-4945-bba8-3c9c7bff3090-dns-svc\") pod \"dnsmasq-dns-cb666b895-qsrsm\" (UID: \"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090\") " pod="openstack/dnsmasq-dns-cb666b895-qsrsm"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.238478 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsc7q\" (UniqueName: \"kubernetes.io/projected/a7b55fe6-1d8f-4945-bba8-3c9c7bff3090-kube-api-access-vsc7q\") pod \"dnsmasq-dns-cb666b895-qsrsm\" (UID: \"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090\") " pod="openstack/dnsmasq-dns-cb666b895-qsrsm"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.238891 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7b55fe6-1d8f-4945-bba8-3c9c7bff3090-config\") pod \"dnsmasq-dns-cb666b895-qsrsm\" (UID: \"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090\") " pod="openstack/dnsmasq-dns-cb666b895-qsrsm"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.238922 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7b55fe6-1d8f-4945-bba8-3c9c7bff3090-dns-svc\") pod \"dnsmasq-dns-cb666b895-qsrsm\" (UID: \"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090\") " pod="openstack/dnsmasq-dns-cb666b895-qsrsm"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.239884 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7b55fe6-1d8f-4945-bba8-3c9c7bff3090-config\") pod \"dnsmasq-dns-cb666b895-qsrsm\" (UID: \"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090\") " pod="openstack/dnsmasq-dns-cb666b895-qsrsm"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.248174 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7b55fe6-1d8f-4945-bba8-3c9c7bff3090-dns-svc\") pod \"dnsmasq-dns-cb666b895-qsrsm\" (UID: \"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090\") " pod="openstack/dnsmasq-dns-cb666b895-qsrsm"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.257560 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsc7q\" (UniqueName: \"kubernetes.io/projected/a7b55fe6-1d8f-4945-bba8-3c9c7bff3090-kube-api-access-vsc7q\") pod \"dnsmasq-dns-cb666b895-qsrsm\" (UID: \"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090\") " pod="openstack/dnsmasq-dns-cb666b895-qsrsm"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.452913 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bc4b48fc9-h6zwn"]
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.556306 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb666b895-qsrsm"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.827637 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.830863 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.833013 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.833230 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.833310 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.833831 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.834434 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.835115 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jzl2c"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.835221 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.835269 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.939997 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb666b895-qsrsm"]
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.949223 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1a61d456-9eea-447f-b576-77473222d108-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.949287 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1a61d456-9eea-447f-b576-77473222d108-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.949344 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a61d456-9eea-447f-b576-77473222d108-config-data\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.949388 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.949425 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.949441 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1a61d456-9eea-447f-b576-77473222d108-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.949478 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1a61d456-9eea-447f-b576-77473222d108-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.949514 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.949573 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f68jt\" (UniqueName: \"kubernetes.io/projected/1a61d456-9eea-447f-b576-77473222d108-kube-api-access-f68jt\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.949628 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:10 crc kubenswrapper[4796]: I1205 10:40:10.949652 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.050632 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a61d456-9eea-447f-b576-77473222d108-config-data\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.051008 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.051037 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1a61d456-9eea-447f-b576-77473222d108-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.051056 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.051090 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1a61d456-9eea-447f-b576-77473222d108-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.051117 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.051141 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f68jt\" (UniqueName: \"kubernetes.io/projected/1a61d456-9eea-447f-b576-77473222d108-kube-api-access-f68jt\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.051170 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.051182 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.051252 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1a61d456-9eea-447f-b576-77473222d108-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.051270 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1a61d456-9eea-447f-b576-77473222d108-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.051483 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.052370 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1a61d456-9eea-447f-b576-77473222d108-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.052658 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1a61d456-9eea-447f-b576-77473222d108-server-conf\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.056585 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.056837 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.057446 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.057749 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.059167 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a61d456-9eea-447f-b576-77473222d108-config-data\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.059503 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1a61d456-9eea-447f-b576-77473222d108-pod-info\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0"
Dec 05
10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.061864 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1a61d456-9eea-447f-b576-77473222d108-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.065543 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f68jt\" (UniqueName: \"kubernetes.io/projected/1a61d456-9eea-447f-b576-77473222d108-kube-api-access-f68jt\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.095757 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " pod="openstack/rabbitmq-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.111043 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.112375 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.113882 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-nq4sp" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.114470 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.115199 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.115217 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.115748 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.116496 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.116630 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.117342 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.174037 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.182923 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn" event={"ID":"d89aa469-fd41-4537-935b-5a03efe42ce8","Type":"ContainerStarted","Data":"a7ed644e42d27c8c6cb05e57a930870cc63ddc8f619ff9328e13958557ac5076"} Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.254760 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f735325f-6e38-45a2-a5bd-9ad19c40b36f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.254802 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.254852 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f735325f-6e38-45a2-a5bd-9ad19c40b36f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.254885 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc7gp\" (UniqueName: \"kubernetes.io/projected/f735325f-6e38-45a2-a5bd-9ad19c40b36f-kube-api-access-wc7gp\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 
10:40:11.254909 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f735325f-6e38-45a2-a5bd-9ad19c40b36f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.254938 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.254960 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.254979 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.254995 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 
10:40:11.255018 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f735325f-6e38-45a2-a5bd-9ad19c40b36f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.255054 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f735325f-6e38-45a2-a5bd-9ad19c40b36f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.356876 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f735325f-6e38-45a2-a5bd-9ad19c40b36f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.356947 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.357020 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f735325f-6e38-45a2-a5bd-9ad19c40b36f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.357058 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/f735325f-6e38-45a2-a5bd-9ad19c40b36f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.357075 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc7gp\" (UniqueName: \"kubernetes.io/projected/f735325f-6e38-45a2-a5bd-9ad19c40b36f-kube-api-access-wc7gp\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.357110 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.357131 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.357147 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.357161 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.357182 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f735325f-6e38-45a2-a5bd-9ad19c40b36f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.357216 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f735325f-6e38-45a2-a5bd-9ad19c40b36f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.357232 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.358363 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f735325f-6e38-45a2-a5bd-9ad19c40b36f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.358412 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.359069 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f735325f-6e38-45a2-a5bd-9ad19c40b36f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.359510 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f735325f-6e38-45a2-a5bd-9ad19c40b36f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.359766 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.361505 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f735325f-6e38-45a2-a5bd-9ad19c40b36f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.362186 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc 
kubenswrapper[4796]: I1205 10:40:11.363196 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.371139 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.373457 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc7gp\" (UniqueName: \"kubernetes.io/projected/f735325f-6e38-45a2-a5bd-9ad19c40b36f-kube-api-access-wc7gp\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.377050 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f735325f-6e38-45a2-a5bd-9ad19c40b36f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:11 crc kubenswrapper[4796]: I1205 10:40:11.435169 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:40:12 crc kubenswrapper[4796]: W1205 10:40:12.520502 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7b55fe6_1d8f_4945_bba8_3c9c7bff3090.slice/crio-325c93202dfd9162538259730a05b25f95072b441585a978678261e4cc13b4c4 WatchSource:0}: Error finding container 325c93202dfd9162538259730a05b25f95072b441585a978678261e4cc13b4c4: Status 404 returned error can't find the container with id 325c93202dfd9162538259730a05b25f95072b441585a978678261e4cc13b4c4 Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.541995 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.545641 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.548029 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-jcqh9" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.548193 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.548318 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.548392 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.548445 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.548852 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.554144 4796 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.672423 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/e649ad8d-0ed0-495c-9abc-7220d750f060-secrets\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.672922 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e649ad8d-0ed0-495c-9abc-7220d750f060-config-data-default\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.672961 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e649ad8d-0ed0-495c-9abc-7220d750f060-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.672998 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.673465 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e649ad8d-0ed0-495c-9abc-7220d750f060-kolla-config\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " 
pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.673552 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e649ad8d-0ed0-495c-9abc-7220d750f060-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.673800 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e649ad8d-0ed0-495c-9abc-7220d750f060-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.673882 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbwv8\" (UniqueName: \"kubernetes.io/projected/e649ad8d-0ed0-495c-9abc-7220d750f060-kube-api-access-dbwv8\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.674040 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e649ad8d-0ed0-495c-9abc-7220d750f060-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.777033 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e649ad8d-0ed0-495c-9abc-7220d750f060-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " 
pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.777102 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbwv8\" (UniqueName: \"kubernetes.io/projected/e649ad8d-0ed0-495c-9abc-7220d750f060-kube-api-access-dbwv8\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.777146 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e649ad8d-0ed0-495c-9abc-7220d750f060-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.777170 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/e649ad8d-0ed0-495c-9abc-7220d750f060-secrets\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.777207 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e649ad8d-0ed0-495c-9abc-7220d750f060-config-data-default\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.777228 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e649ad8d-0ed0-495c-9abc-7220d750f060-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.777253 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.777298 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e649ad8d-0ed0-495c-9abc-7220d750f060-kolla-config\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.777315 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e649ad8d-0ed0-495c-9abc-7220d750f060-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.777424 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e649ad8d-0ed0-495c-9abc-7220d750f060-config-data-generated\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.778339 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e649ad8d-0ed0-495c-9abc-7220d750f060-config-data-default\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.778377 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: 
\"e649ad8d-0ed0-495c-9abc-7220d750f060\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.778715 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e649ad8d-0ed0-495c-9abc-7220d750f060-operator-scripts\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.779329 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e649ad8d-0ed0-495c-9abc-7220d750f060-kolla-config\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.786871 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/e649ad8d-0ed0-495c-9abc-7220d750f060-secrets\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.786889 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e649ad8d-0ed0-495c-9abc-7220d750f060-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.786931 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e649ad8d-0ed0-495c-9abc-7220d750f060-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.799368 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.800172 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbwv8\" (UniqueName: \"kubernetes.io/projected/e649ad8d-0ed0-495c-9abc-7220d750f060-kube-api-access-dbwv8\") pod \"openstack-galera-0\" (UID: \"e649ad8d-0ed0-495c-9abc-7220d750f060\") " pod="openstack/openstack-galera-0" Dec 05 10:40:12 crc kubenswrapper[4796]: I1205 10:40:12.867396 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.200876 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb666b895-qsrsm" event={"ID":"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090","Type":"ContainerStarted","Data":"325c93202dfd9162538259730a05b25f95072b441585a978678261e4cc13b4c4"} Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.671280 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dnpfk"] Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.673012 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dnpfk" Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.681111 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dnpfk"] Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.796031 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c56d752f-ca0a-4be7-b05b-381b1a5fdfc6-utilities\") pod \"redhat-marketplace-dnpfk\" (UID: \"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6\") " pod="openshift-marketplace/redhat-marketplace-dnpfk" Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.796331 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9dr8\" (UniqueName: \"kubernetes.io/projected/c56d752f-ca0a-4be7-b05b-381b1a5fdfc6-kube-api-access-w9dr8\") pod \"redhat-marketplace-dnpfk\" (UID: \"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6\") " pod="openshift-marketplace/redhat-marketplace-dnpfk" Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.796395 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c56d752f-ca0a-4be7-b05b-381b1a5fdfc6-catalog-content\") pod \"redhat-marketplace-dnpfk\" (UID: \"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6\") " pod="openshift-marketplace/redhat-marketplace-dnpfk" Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.897464 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c56d752f-ca0a-4be7-b05b-381b1a5fdfc6-catalog-content\") pod \"redhat-marketplace-dnpfk\" (UID: \"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6\") " pod="openshift-marketplace/redhat-marketplace-dnpfk" Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.897559 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c56d752f-ca0a-4be7-b05b-381b1a5fdfc6-utilities\") pod \"redhat-marketplace-dnpfk\" (UID: \"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6\") " pod="openshift-marketplace/redhat-marketplace-dnpfk" Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.897595 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9dr8\" (UniqueName: \"kubernetes.io/projected/c56d752f-ca0a-4be7-b05b-381b1a5fdfc6-kube-api-access-w9dr8\") pod \"redhat-marketplace-dnpfk\" (UID: \"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6\") " pod="openshift-marketplace/redhat-marketplace-dnpfk" Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.898183 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c56d752f-ca0a-4be7-b05b-381b1a5fdfc6-utilities\") pod \"redhat-marketplace-dnpfk\" (UID: \"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6\") " pod="openshift-marketplace/redhat-marketplace-dnpfk" Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.898226 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c56d752f-ca0a-4be7-b05b-381b1a5fdfc6-catalog-content\") pod \"redhat-marketplace-dnpfk\" (UID: \"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6\") " pod="openshift-marketplace/redhat-marketplace-dnpfk" Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.914016 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9dr8\" (UniqueName: \"kubernetes.io/projected/c56d752f-ca0a-4be7-b05b-381b1a5fdfc6-kube-api-access-w9dr8\") pod \"redhat-marketplace-dnpfk\" (UID: \"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6\") " pod="openshift-marketplace/redhat-marketplace-dnpfk" Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.945614 4796 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/openstack-cell1-galera-0"] Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.946611 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.950214 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.951368 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-wcftf" Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.951675 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.952189 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.961862 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 10:40:13 crc kubenswrapper[4796]: I1205 10:40:13.999288 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dnpfk" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.101393 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2pl2\" (UniqueName: \"kubernetes.io/projected/0deffc65-bff4-419f-aa12-2c17432112a3-kube-api-access-z2pl2\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.101435 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/0deffc65-bff4-419f-aa12-2c17432112a3-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.101467 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0deffc65-bff4-419f-aa12-2c17432112a3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.101491 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.101509 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0deffc65-bff4-419f-aa12-2c17432112a3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: 
\"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.101538 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0deffc65-bff4-419f-aa12-2c17432112a3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.101693 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0deffc65-bff4-419f-aa12-2c17432112a3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.101833 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0deffc65-bff4-419f-aa12-2c17432112a3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.101884 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0deffc65-bff4-419f-aa12-2c17432112a3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.203781 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.203824 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0deffc65-bff4-419f-aa12-2c17432112a3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.203864 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0deffc65-bff4-419f-aa12-2c17432112a3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.203893 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0deffc65-bff4-419f-aa12-2c17432112a3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.203932 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0deffc65-bff4-419f-aa12-2c17432112a3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.203952 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0deffc65-bff4-419f-aa12-2c17432112a3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " 
pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.204003 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2pl2\" (UniqueName: \"kubernetes.io/projected/0deffc65-bff4-419f-aa12-2c17432112a3-kube-api-access-z2pl2\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.204036 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/0deffc65-bff4-419f-aa12-2c17432112a3-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.204072 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0deffc65-bff4-419f-aa12-2c17432112a3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.204136 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.204426 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0deffc65-bff4-419f-aa12-2c17432112a3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc 
kubenswrapper[4796]: I1205 10:40:14.204994 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0deffc65-bff4-419f-aa12-2c17432112a3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.206048 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0deffc65-bff4-419f-aa12-2c17432112a3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.206936 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0deffc65-bff4-419f-aa12-2c17432112a3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.207137 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0deffc65-bff4-419f-aa12-2c17432112a3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.212322 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0deffc65-bff4-419f-aa12-2c17432112a3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.218115 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secrets\" (UniqueName: \"kubernetes.io/secret/0deffc65-bff4-419f-aa12-2c17432112a3-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.229300 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2pl2\" (UniqueName: \"kubernetes.io/projected/0deffc65-bff4-419f-aa12-2c17432112a3-kube-api-access-z2pl2\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.238097 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0deffc65-bff4-419f-aa12-2c17432112a3\") " pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.252040 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.253277 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.257967 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.258228 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-8jz6q" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.258364 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.265216 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.266742 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.407345 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c41250df-9d98-441a-a88f-6a17034b8d31-config-data\") pod \"memcached-0\" (UID: \"c41250df-9d98-441a-a88f-6a17034b8d31\") " pod="openstack/memcached-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.407476 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c41250df-9d98-441a-a88f-6a17034b8d31-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c41250df-9d98-441a-a88f-6a17034b8d31\") " pod="openstack/memcached-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.407543 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78t26\" (UniqueName: \"kubernetes.io/projected/c41250df-9d98-441a-a88f-6a17034b8d31-kube-api-access-78t26\") pod \"memcached-0\" (UID: \"c41250df-9d98-441a-a88f-6a17034b8d31\") 
" pod="openstack/memcached-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.407576 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41250df-9d98-441a-a88f-6a17034b8d31-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c41250df-9d98-441a-a88f-6a17034b8d31\") " pod="openstack/memcached-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.407624 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c41250df-9d98-441a-a88f-6a17034b8d31-kolla-config\") pod \"memcached-0\" (UID: \"c41250df-9d98-441a-a88f-6a17034b8d31\") " pod="openstack/memcached-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.508975 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c41250df-9d98-441a-a88f-6a17034b8d31-config-data\") pod \"memcached-0\" (UID: \"c41250df-9d98-441a-a88f-6a17034b8d31\") " pod="openstack/memcached-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.509048 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c41250df-9d98-441a-a88f-6a17034b8d31-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c41250df-9d98-441a-a88f-6a17034b8d31\") " pod="openstack/memcached-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.509094 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78t26\" (UniqueName: \"kubernetes.io/projected/c41250df-9d98-441a-a88f-6a17034b8d31-kube-api-access-78t26\") pod \"memcached-0\" (UID: \"c41250df-9d98-441a-a88f-6a17034b8d31\") " pod="openstack/memcached-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.509118 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41250df-9d98-441a-a88f-6a17034b8d31-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c41250df-9d98-441a-a88f-6a17034b8d31\") " pod="openstack/memcached-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.509148 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c41250df-9d98-441a-a88f-6a17034b8d31-kolla-config\") pod \"memcached-0\" (UID: \"c41250df-9d98-441a-a88f-6a17034b8d31\") " pod="openstack/memcached-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.509860 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c41250df-9d98-441a-a88f-6a17034b8d31-kolla-config\") pod \"memcached-0\" (UID: \"c41250df-9d98-441a-a88f-6a17034b8d31\") " pod="openstack/memcached-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.510394 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c41250df-9d98-441a-a88f-6a17034b8d31-config-data\") pod \"memcached-0\" (UID: \"c41250df-9d98-441a-a88f-6a17034b8d31\") " pod="openstack/memcached-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.514479 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c41250df-9d98-441a-a88f-6a17034b8d31-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c41250df-9d98-441a-a88f-6a17034b8d31\") " pod="openstack/memcached-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.526377 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c41250df-9d98-441a-a88f-6a17034b8d31-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c41250df-9d98-441a-a88f-6a17034b8d31\") " pod="openstack/memcached-0" Dec 05 10:40:14 crc 
kubenswrapper[4796]: I1205 10:40:14.528945 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78t26\" (UniqueName: \"kubernetes.io/projected/c41250df-9d98-441a-a88f-6a17034b8d31-kube-api-access-78t26\") pod \"memcached-0\" (UID: \"c41250df-9d98-441a-a88f-6a17034b8d31\") " pod="openstack/memcached-0" Dec 05 10:40:14 crc kubenswrapper[4796]: I1205 10:40:14.584889 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 05 10:40:16 crc kubenswrapper[4796]: I1205 10:40:16.125374 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 10:40:16 crc kubenswrapper[4796]: I1205 10:40:16.126451 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 10:40:16 crc kubenswrapper[4796]: I1205 10:40:16.128227 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-xxjvn" Dec 05 10:40:16 crc kubenswrapper[4796]: I1205 10:40:16.134238 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 10:40:16 crc kubenswrapper[4796]: I1205 10:40:16.237397 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr9ft\" (UniqueName: \"kubernetes.io/projected/d25dde1a-1eba-4b84-b637-134daea7451e-kube-api-access-sr9ft\") pod \"kube-state-metrics-0\" (UID: \"d25dde1a-1eba-4b84-b637-134daea7451e\") " pod="openstack/kube-state-metrics-0" Dec 05 10:40:16 crc kubenswrapper[4796]: I1205 10:40:16.338662 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr9ft\" (UniqueName: \"kubernetes.io/projected/d25dde1a-1eba-4b84-b637-134daea7451e-kube-api-access-sr9ft\") pod \"kube-state-metrics-0\" (UID: \"d25dde1a-1eba-4b84-b637-134daea7451e\") " pod="openstack/kube-state-metrics-0" Dec 05 10:40:16 crc 
kubenswrapper[4796]: I1205 10:40:16.355398 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr9ft\" (UniqueName: \"kubernetes.io/projected/d25dde1a-1eba-4b84-b637-134daea7451e-kube-api-access-sr9ft\") pod \"kube-state-metrics-0\" (UID: \"d25dde1a-1eba-4b84-b637-134daea7451e\") " pod="openstack/kube-state-metrics-0" Dec 05 10:40:16 crc kubenswrapper[4796]: I1205 10:40:16.446625 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 10:40:19 crc kubenswrapper[4796]: I1205 10:40:19.945531 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vb5l5"] Dec 05 10:40:19 crc kubenswrapper[4796]: I1205 10:40:19.946916 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:19 crc kubenswrapper[4796]: I1205 10:40:19.949617 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 05 10:40:19 crc kubenswrapper[4796]: I1205 10:40:19.949719 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-f6bcd" Dec 05 10:40:19 crc kubenswrapper[4796]: I1205 10:40:19.949622 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 05 10:40:19 crc kubenswrapper[4796]: I1205 10:40:19.951069 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-z7p8z"] Dec 05 10:40:19 crc kubenswrapper[4796]: I1205 10:40:19.952575 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:19 crc kubenswrapper[4796]: I1205 10:40:19.955113 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vb5l5"] Dec 05 10:40:19 crc kubenswrapper[4796]: I1205 10:40:19.960036 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-z7p8z"] Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.002621 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 10:40:20 crc kubenswrapper[4796]: W1205 10:40:20.015400 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf735325f_6e38_45a2_a5bd_9ad19c40b36f.slice/crio-3b48ae83e0524095cbdd1c3e5a8024047b535d12336875e63b809305fa55ae01 WatchSource:0}: Error finding container 3b48ae83e0524095cbdd1c3e5a8024047b535d12336875e63b809305fa55ae01: Status 404 returned error can't find the container with id 3b48ae83e0524095cbdd1c3e5a8024047b535d12336875e63b809305fa55ae01 Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.099086 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f90214e2-6d6d-42b7-8a46-0fb779d31cba-var-lib\") pod \"ovn-controller-ovs-z7p8z\" (UID: \"f90214e2-6d6d-42b7-8a46-0fb779d31cba\") " pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.099436 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f90214e2-6d6d-42b7-8a46-0fb779d31cba-var-log\") pod \"ovn-controller-ovs-z7p8z\" (UID: \"f90214e2-6d6d-42b7-8a46-0fb779d31cba\") " pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.099478 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-mrc56\" (UniqueName: \"kubernetes.io/projected/f90214e2-6d6d-42b7-8a46-0fb779d31cba-kube-api-access-mrc56\") pod \"ovn-controller-ovs-z7p8z\" (UID: \"f90214e2-6d6d-42b7-8a46-0fb779d31cba\") " pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.099515 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-ovn-controller-tls-certs\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.099537 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-scripts\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.099569 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-var-log-ovn\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.099605 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-var-run-ovn\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.099656 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-d85jk\" (UniqueName: \"kubernetes.io/projected/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-kube-api-access-d85jk\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.099721 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f90214e2-6d6d-42b7-8a46-0fb779d31cba-etc-ovs\") pod \"ovn-controller-ovs-z7p8z\" (UID: \"f90214e2-6d6d-42b7-8a46-0fb779d31cba\") " pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.099740 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-combined-ca-bundle\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.099757 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f90214e2-6d6d-42b7-8a46-0fb779d31cba-var-run\") pod \"ovn-controller-ovs-z7p8z\" (UID: \"f90214e2-6d6d-42b7-8a46-0fb779d31cba\") " pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.099782 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f90214e2-6d6d-42b7-8a46-0fb779d31cba-scripts\") pod \"ovn-controller-ovs-z7p8z\" (UID: \"f90214e2-6d6d-42b7-8a46-0fb779d31cba\") " pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.099852 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-var-run\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.109703 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.119632 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 05 10:40:20 crc kubenswrapper[4796]: W1205 10:40:20.121219 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd25dde1a_1eba_4b84_b637_134daea7451e.slice/crio-d9904b1d0ad69d9d856611bc8401e0deb2d0222e9a98f739e11fd3ff551e87e2 WatchSource:0}: Error finding container d9904b1d0ad69d9d856611bc8401e0deb2d0222e9a98f739e11fd3ff551e87e2: Status 404 returned error can't find the container with id d9904b1d0ad69d9d856611bc8401e0deb2d0222e9a98f739e11fd3ff551e87e2 Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.125963 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.139034 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.150735 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.201666 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-ovn-controller-tls-certs\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.201734 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-scripts\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.201778 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-var-log-ovn\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.201811 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-var-run-ovn\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.201837 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d85jk\" (UniqueName: \"kubernetes.io/projected/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-kube-api-access-d85jk\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.201872 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f90214e2-6d6d-42b7-8a46-0fb779d31cba-etc-ovs\") pod \"ovn-controller-ovs-z7p8z\" (UID: \"f90214e2-6d6d-42b7-8a46-0fb779d31cba\") " pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.201889 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-combined-ca-bundle\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.201907 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f90214e2-6d6d-42b7-8a46-0fb779d31cba-var-run\") pod \"ovn-controller-ovs-z7p8z\" (UID: \"f90214e2-6d6d-42b7-8a46-0fb779d31cba\") " pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.201926 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f90214e2-6d6d-42b7-8a46-0fb779d31cba-scripts\") pod \"ovn-controller-ovs-z7p8z\" (UID: \"f90214e2-6d6d-42b7-8a46-0fb779d31cba\") " pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.201943 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-var-run\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.201968 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f90214e2-6d6d-42b7-8a46-0fb779d31cba-var-lib\") pod \"ovn-controller-ovs-z7p8z\" (UID: \"f90214e2-6d6d-42b7-8a46-0fb779d31cba\") " pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.201988 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f90214e2-6d6d-42b7-8a46-0fb779d31cba-var-log\") pod \"ovn-controller-ovs-z7p8z\" (UID: \"f90214e2-6d6d-42b7-8a46-0fb779d31cba\") " 
pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.202018 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrc56\" (UniqueName: \"kubernetes.io/projected/f90214e2-6d6d-42b7-8a46-0fb779d31cba-kube-api-access-mrc56\") pod \"ovn-controller-ovs-z7p8z\" (UID: \"f90214e2-6d6d-42b7-8a46-0fb779d31cba\") " pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.202659 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/f90214e2-6d6d-42b7-8a46-0fb779d31cba-etc-ovs\") pod \"ovn-controller-ovs-z7p8z\" (UID: \"f90214e2-6d6d-42b7-8a46-0fb779d31cba\") " pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.202665 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-var-run-ovn\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.202825 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f90214e2-6d6d-42b7-8a46-0fb779d31cba-var-run\") pod \"ovn-controller-ovs-z7p8z\" (UID: \"f90214e2-6d6d-42b7-8a46-0fb779d31cba\") " pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.202836 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-var-log-ovn\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.202861 4796 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f90214e2-6d6d-42b7-8a46-0fb779d31cba-var-log\") pod \"ovn-controller-ovs-z7p8z\" (UID: \"f90214e2-6d6d-42b7-8a46-0fb779d31cba\") " pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.202851 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/f90214e2-6d6d-42b7-8a46-0fb779d31cba-var-lib\") pod \"ovn-controller-ovs-z7p8z\" (UID: \"f90214e2-6d6d-42b7-8a46-0fb779d31cba\") " pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.202885 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-var-run\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.204256 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-scripts\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.205071 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f90214e2-6d6d-42b7-8a46-0fb779d31cba-scripts\") pod \"ovn-controller-ovs-z7p8z\" (UID: \"f90214e2-6d6d-42b7-8a46-0fb779d31cba\") " pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.206408 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-combined-ca-bundle\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " 
pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.207130 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-ovn-controller-tls-certs\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.216552 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrc56\" (UniqueName: \"kubernetes.io/projected/f90214e2-6d6d-42b7-8a46-0fb779d31cba-kube-api-access-mrc56\") pod \"ovn-controller-ovs-z7p8z\" (UID: \"f90214e2-6d6d-42b7-8a46-0fb779d31cba\") " pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.217407 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d85jk\" (UniqueName: \"kubernetes.io/projected/2f5f2848-5f90-4a9f-a6f0-b6e83b586402-kube-api-access-d85jk\") pod \"ovn-controller-vb5l5\" (UID: \"2f5f2848-5f90-4a9f-a6f0-b6e83b586402\") " pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.262706 4796 generic.go:334] "Generic (PLEG): container finished" podID="65453db8-d128-4c96-801f-78ae59451fe5" containerID="50c3a958fc7f348336bf8a44096208be04b86de320c553482e53a4f56ba07a0d" exitCode=0 Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.262809 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567c455747-m98ht" event={"ID":"65453db8-d128-4c96-801f-78ae59451fe5","Type":"ContainerDied","Data":"50c3a958fc7f348336bf8a44096208be04b86de320c553482e53a4f56ba07a0d"} Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.264723 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"e649ad8d-0ed0-495c-9abc-7220d750f060","Type":"ContainerStarted","Data":"ba48303b44ee04acf1aadb4fb79a09a6668f41ab72c90863feaa7d7406d15726"} Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.269053 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d25dde1a-1eba-4b84-b637-134daea7451e","Type":"ContainerStarted","Data":"d9904b1d0ad69d9d856611bc8401e0deb2d0222e9a98f739e11fd3ff551e87e2"} Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.270585 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1a61d456-9eea-447f-b576-77473222d108","Type":"ContainerStarted","Data":"a7c9b0ab82d87d579412d02366af28e420fa8ec0bf60ad411a27483abbd41d01"} Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.273725 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0deffc65-bff4-419f-aa12-2c17432112a3","Type":"ContainerStarted","Data":"ea638a33f4ddbcd873b4a51b80736393efae734fd08bf49937e7ffdac4b4df9a"} Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.275746 4796 generic.go:334] "Generic (PLEG): container finished" podID="a7b55fe6-1d8f-4945-bba8-3c9c7bff3090" containerID="bec329e9587bb0556071c77f514f01d12c55b1cb5a0dc9fa87ae758058139b4c" exitCode=0 Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.276063 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb666b895-qsrsm" event={"ID":"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090","Type":"ContainerDied","Data":"bec329e9587bb0556071c77f514f01d12c55b1cb5a0dc9fa87ae758058139b4c"} Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.282163 4796 generic.go:334] "Generic (PLEG): container finished" podID="9b711e05-6b25-48ff-9228-abaab40dae1e" containerID="615d889f302fb601d45c301abf4e06eb2ca164f18248caa1037612b1a964936c" exitCode=0 Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.282739 4796 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-5cd484bb89-pk8qm" event={"ID":"9b711e05-6b25-48ff-9228-abaab40dae1e","Type":"ContainerDied","Data":"615d889f302fb601d45c301abf4e06eb2ca164f18248caa1037612b1a964936c"} Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.285522 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f735325f-6e38-45a2-a5bd-9ad19c40b36f","Type":"ContainerStarted","Data":"3b48ae83e0524095cbdd1c3e5a8024047b535d12336875e63b809305fa55ae01"} Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.287920 4796 generic.go:334] "Generic (PLEG): container finished" podID="d89aa469-fd41-4537-935b-5a03efe42ce8" containerID="9592d37190b7a01734fdb2d5a9a25ca32b851bc9328d6aba100851a3d0ef5795" exitCode=0 Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.288011 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn" event={"ID":"d89aa469-fd41-4537-935b-5a03efe42ce8","Type":"ContainerDied","Data":"9592d37190b7a01734fdb2d5a9a25ca32b851bc9328d6aba100851a3d0ef5795"} Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.292067 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c41250df-9d98-441a-a88f-6a17034b8d31","Type":"ContainerStarted","Data":"aa555556459d1d47c8412cec825cf1f28a3f31a9a2f746363e2489319a4f0d4f"} Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.303185 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dnpfk"] Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.324105 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.328837 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:20 crc kubenswrapper[4796]: E1205 10:40:20.522289 4796 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Dec 05 10:40:20 crc kubenswrapper[4796]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/d89aa469-fd41-4537-935b-5a03efe42ce8/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 05 10:40:20 crc kubenswrapper[4796]: > podSandboxID="a7ed644e42d27c8c6cb05e57a930870cc63ddc8f619ff9328e13958557ac5076" Dec 05 10:40:20 crc kubenswrapper[4796]: E1205 10:40:20.525131 4796 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 05 10:40:20 crc kubenswrapper[4796]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv 
--log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9n22b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-bc4b48fc9-h6zwn_openstack(d89aa469-fd41-4537-935b-5a03efe42ce8): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/d89aa469-fd41-4537-935b-5a03efe42ce8/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 05 10:40:20 crc kubenswrapper[4796]: > logger="UnhandledError" Dec 05 10:40:20 crc kubenswrapper[4796]: E1205 10:40:20.527213 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/d89aa469-fd41-4537-935b-5a03efe42ce8/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn" podUID="d89aa469-fd41-4537-935b-5a03efe42ce8" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.698738 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-567c455747-m98ht" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.778246 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cd484bb89-pk8qm" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.816535 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzllb\" (UniqueName: \"kubernetes.io/projected/65453db8-d128-4c96-801f-78ae59451fe5-kube-api-access-jzllb\") pod \"65453db8-d128-4c96-801f-78ae59451fe5\" (UID: \"65453db8-d128-4c96-801f-78ae59451fe5\") " Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.817434 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65453db8-d128-4c96-801f-78ae59451fe5-dns-svc\") pod \"65453db8-d128-4c96-801f-78ae59451fe5\" (UID: \"65453db8-d128-4c96-801f-78ae59451fe5\") " Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.817482 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65453db8-d128-4c96-801f-78ae59451fe5-config\") pod \"65453db8-d128-4c96-801f-78ae59451fe5\" (UID: \"65453db8-d128-4c96-801f-78ae59451fe5\") " Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.824436 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65453db8-d128-4c96-801f-78ae59451fe5-kube-api-access-jzllb" (OuterVolumeSpecName: "kube-api-access-jzllb") pod "65453db8-d128-4c96-801f-78ae59451fe5" (UID: "65453db8-d128-4c96-801f-78ae59451fe5"). InnerVolumeSpecName "kube-api-access-jzllb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.830107 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vb5l5"] Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.833395 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65453db8-d128-4c96-801f-78ae59451fe5-config" (OuterVolumeSpecName: "config") pod "65453db8-d128-4c96-801f-78ae59451fe5" (UID: "65453db8-d128-4c96-801f-78ae59451fe5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.841313 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65453db8-d128-4c96-801f-78ae59451fe5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "65453db8-d128-4c96-801f-78ae59451fe5" (UID: "65453db8-d128-4c96-801f-78ae59451fe5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.918925 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj2s7\" (UniqueName: \"kubernetes.io/projected/9b711e05-6b25-48ff-9228-abaab40dae1e-kube-api-access-pj2s7\") pod \"9b711e05-6b25-48ff-9228-abaab40dae1e\" (UID: \"9b711e05-6b25-48ff-9228-abaab40dae1e\") " Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.919241 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b711e05-6b25-48ff-9228-abaab40dae1e-config\") pod \"9b711e05-6b25-48ff-9228-abaab40dae1e\" (UID: \"9b711e05-6b25-48ff-9228-abaab40dae1e\") " Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.919581 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65453db8-d128-4c96-801f-78ae59451fe5-dns-svc\") on node \"crc\" 
DevicePath \"\"" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.919592 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65453db8-d128-4c96-801f-78ae59451fe5-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.919601 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzllb\" (UniqueName: \"kubernetes.io/projected/65453db8-d128-4c96-801f-78ae59451fe5-kube-api-access-jzllb\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.924405 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b711e05-6b25-48ff-9228-abaab40dae1e-kube-api-access-pj2s7" (OuterVolumeSpecName: "kube-api-access-pj2s7") pod "9b711e05-6b25-48ff-9228-abaab40dae1e" (UID: "9b711e05-6b25-48ff-9228-abaab40dae1e"). InnerVolumeSpecName "kube-api-access-pj2s7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.938899 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b711e05-6b25-48ff-9228-abaab40dae1e-config" (OuterVolumeSpecName: "config") pod "9b711e05-6b25-48ff-9228-abaab40dae1e" (UID: "9b711e05-6b25-48ff-9228-abaab40dae1e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:40:20 crc kubenswrapper[4796]: I1205 10:40:20.949785 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-z7p8z"] Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.020579 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj2s7\" (UniqueName: \"kubernetes.io/projected/9b711e05-6b25-48ff-9228-abaab40dae1e-kube-api-access-pj2s7\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.020604 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b711e05-6b25-48ff-9228-abaab40dae1e-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:21 crc kubenswrapper[4796]: W1205 10:40:21.111305 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f5f2848_5f90_4a9f_a6f0_b6e83b586402.slice/crio-cc011bac6304184761149c6ec78f812b7ebf0e549be969c9c9aafc9f2bfd27ba WatchSource:0}: Error finding container cc011bac6304184761149c6ec78f812b7ebf0e549be969c9c9aafc9f2bfd27ba: Status 404 returned error can't find the container with id cc011bac6304184761149c6ec78f812b7ebf0e549be969c9c9aafc9f2bfd27ba Dec 05 10:40:21 crc kubenswrapper[4796]: W1205 10:40:21.112993 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf90214e2_6d6d_42b7_8a46_0fb779d31cba.slice/crio-fb7238178fab972835af030bb9016b513db1b3af5a502047715deca224c6b69b WatchSource:0}: Error finding container fb7238178fab972835af030bb9016b513db1b3af5a502047715deca224c6b69b: Status 404 returned error can't find the container with id fb7238178fab972835af030bb9016b513db1b3af5a502047715deca224c6b69b Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.301003 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vb5l5" 
event={"ID":"2f5f2848-5f90-4a9f-a6f0-b6e83b586402","Type":"ContainerStarted","Data":"cc011bac6304184761149c6ec78f812b7ebf0e549be969c9c9aafc9f2bfd27ba"} Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.303534 4796 generic.go:334] "Generic (PLEG): container finished" podID="c56d752f-ca0a-4be7-b05b-381b1a5fdfc6" containerID="970266e6f8c4beaae3b9e78d35f869c388cc5af3909701199c94feda734c59ec" exitCode=0 Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.303607 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnpfk" event={"ID":"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6","Type":"ContainerDied","Data":"970266e6f8c4beaae3b9e78d35f869c388cc5af3909701199c94feda734c59ec"} Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.303631 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnpfk" event={"ID":"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6","Type":"ContainerStarted","Data":"193fecf62968d4ede0e6e8123e9fc1fdc6cd228025aa167e350909a7a4554b36"} Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.307250 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z7p8z" event={"ID":"f90214e2-6d6d-42b7-8a46-0fb779d31cba","Type":"ContainerStarted","Data":"fb7238178fab972835af030bb9016b513db1b3af5a502047715deca224c6b69b"} Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.309313 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb666b895-qsrsm" event={"ID":"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090","Type":"ContainerStarted","Data":"95c1a213f91168989baa795d702a11c376ecd769c9dd11ff9f24833e7707ea3d"} Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.309410 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb666b895-qsrsm" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.312878 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5cd484bb89-pk8qm" event={"ID":"9b711e05-6b25-48ff-9228-abaab40dae1e","Type":"ContainerDied","Data":"ae857ff9625bb575645aeab68eae90796da3b21008cbef8e51232cfaaf8cd44c"} Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.312914 4796 scope.go:117] "RemoveContainer" containerID="615d889f302fb601d45c301abf4e06eb2ca164f18248caa1037612b1a964936c" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.312975 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd484bb89-pk8qm" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.315021 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-567c455747-m98ht" event={"ID":"65453db8-d128-4c96-801f-78ae59451fe5","Type":"ContainerDied","Data":"5c3e12d99e3807322be3f51b63fb1267afa5880993a296db090f003d70a54298"} Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.315814 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-567c455747-m98ht" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.334975 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb666b895-qsrsm" podStartSLOduration=5.260950731 podStartE2EDuration="12.334953446s" podCreationTimestamp="2025-12-05 10:40:09 +0000 UTC" firstStartedPulling="2025-12-05 10:40:12.523834119 +0000 UTC m=+758.811939632" lastFinishedPulling="2025-12-05 10:40:19.597836834 +0000 UTC m=+765.885942347" observedRunningTime="2025-12-05 10:40:21.334921676 +0000 UTC m=+767.623027189" watchObservedRunningTime="2025-12-05 10:40:21.334953446 +0000 UTC m=+767.623058959" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.400464 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-thfdb"] Dec 05 10:40:21 crc kubenswrapper[4796]: E1205 10:40:21.401077 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="65453db8-d128-4c96-801f-78ae59451fe5" containerName="init" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.401089 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="65453db8-d128-4c96-801f-78ae59451fe5" containerName="init" Dec 05 10:40:21 crc kubenswrapper[4796]: E1205 10:40:21.401126 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b711e05-6b25-48ff-9228-abaab40dae1e" containerName="init" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.401132 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b711e05-6b25-48ff-9228-abaab40dae1e" containerName="init" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.403909 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b711e05-6b25-48ff-9228-abaab40dae1e" containerName="init" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.403929 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="65453db8-d128-4c96-801f-78ae59451fe5" containerName="init" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.404860 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.407770 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.407924 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.419465 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-thfdb"] Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.423220 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd484bb89-pk8qm"] Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.432395 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cd484bb89-pk8qm"] Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.466745 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-567c455747-m98ht"] Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.491102 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-567c455747-m98ht"] Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.529078 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/dbbb0427-a102-4a95-a44e-d809d4334090-ovn-rundir\") pod \"ovn-controller-metrics-thfdb\" (UID: \"dbbb0427-a102-4a95-a44e-d809d4334090\") " pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.529199 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbbb0427-a102-4a95-a44e-d809d4334090-config\") pod \"ovn-controller-metrics-thfdb\" (UID: \"dbbb0427-a102-4a95-a44e-d809d4334090\") " 
pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.529218 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbb0427-a102-4a95-a44e-d809d4334090-combined-ca-bundle\") pod \"ovn-controller-metrics-thfdb\" (UID: \"dbbb0427-a102-4a95-a44e-d809d4334090\") " pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.529299 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbbb0427-a102-4a95-a44e-d809d4334090-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-thfdb\" (UID: \"dbbb0427-a102-4a95-a44e-d809d4334090\") " pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.529322 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxqrr\" (UniqueName: \"kubernetes.io/projected/dbbb0427-a102-4a95-a44e-d809d4334090-kube-api-access-kxqrr\") pod \"ovn-controller-metrics-thfdb\" (UID: \"dbbb0427-a102-4a95-a44e-d809d4334090\") " pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.529565 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/dbbb0427-a102-4a95-a44e-d809d4334090-ovs-rundir\") pod \"ovn-controller-metrics-thfdb\" (UID: \"dbbb0427-a102-4a95-a44e-d809d4334090\") " pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.631521 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/dbbb0427-a102-4a95-a44e-d809d4334090-ovn-rundir\") pod \"ovn-controller-metrics-thfdb\" (UID: 
\"dbbb0427-a102-4a95-a44e-d809d4334090\") " pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.631581 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbbb0427-a102-4a95-a44e-d809d4334090-config\") pod \"ovn-controller-metrics-thfdb\" (UID: \"dbbb0427-a102-4a95-a44e-d809d4334090\") " pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.631600 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbb0427-a102-4a95-a44e-d809d4334090-combined-ca-bundle\") pod \"ovn-controller-metrics-thfdb\" (UID: \"dbbb0427-a102-4a95-a44e-d809d4334090\") " pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.631646 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbbb0427-a102-4a95-a44e-d809d4334090-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-thfdb\" (UID: \"dbbb0427-a102-4a95-a44e-d809d4334090\") " pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.631665 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxqrr\" (UniqueName: \"kubernetes.io/projected/dbbb0427-a102-4a95-a44e-d809d4334090-kube-api-access-kxqrr\") pod \"ovn-controller-metrics-thfdb\" (UID: \"dbbb0427-a102-4a95-a44e-d809d4334090\") " pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.631829 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/dbbb0427-a102-4a95-a44e-d809d4334090-ovs-rundir\") pod \"ovn-controller-metrics-thfdb\" (UID: \"dbbb0427-a102-4a95-a44e-d809d4334090\") " 
pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.632149 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/dbbb0427-a102-4a95-a44e-d809d4334090-ovn-rundir\") pod \"ovn-controller-metrics-thfdb\" (UID: \"dbbb0427-a102-4a95-a44e-d809d4334090\") " pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.632444 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbbb0427-a102-4a95-a44e-d809d4334090-config\") pod \"ovn-controller-metrics-thfdb\" (UID: \"dbbb0427-a102-4a95-a44e-d809d4334090\") " pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.632545 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/dbbb0427-a102-4a95-a44e-d809d4334090-ovs-rundir\") pod \"ovn-controller-metrics-thfdb\" (UID: \"dbbb0427-a102-4a95-a44e-d809d4334090\") " pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.651829 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbbb0427-a102-4a95-a44e-d809d4334090-combined-ca-bundle\") pod \"ovn-controller-metrics-thfdb\" (UID: \"dbbb0427-a102-4a95-a44e-d809d4334090\") " pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.651937 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbbb0427-a102-4a95-a44e-d809d4334090-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-thfdb\" (UID: \"dbbb0427-a102-4a95-a44e-d809d4334090\") " pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 
10:40:21.656898 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxqrr\" (UniqueName: \"kubernetes.io/projected/dbbb0427-a102-4a95-a44e-d809d4334090-kube-api-access-kxqrr\") pod \"ovn-controller-metrics-thfdb\" (UID: \"dbbb0427-a102-4a95-a44e-d809d4334090\") " pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.663022 4796 scope.go:117] "RemoveContainer" containerID="50c3a958fc7f348336bf8a44096208be04b86de320c553482e53a4f56ba07a0d" Dec 05 10:40:21 crc kubenswrapper[4796]: I1205 10:40:21.720206 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-thfdb" Dec 05 10:40:22 crc kubenswrapper[4796]: I1205 10:40:22.049691 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65453db8-d128-4c96-801f-78ae59451fe5" path="/var/lib/kubelet/pods/65453db8-d128-4c96-801f-78ae59451fe5/volumes" Dec 05 10:40:22 crc kubenswrapper[4796]: I1205 10:40:22.050175 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b711e05-6b25-48ff-9228-abaab40dae1e" path="/var/lib/kubelet/pods/9b711e05-6b25-48ff-9228-abaab40dae1e/volumes" Dec 05 10:40:22 crc kubenswrapper[4796]: I1205 10:40:22.902194 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 10:40:22 crc kubenswrapper[4796]: I1205 10:40:22.904082 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:22 crc kubenswrapper[4796]: I1205 10:40:22.908419 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 05 10:40:22 crc kubenswrapper[4796]: I1205 10:40:22.908524 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 05 10:40:22 crc kubenswrapper[4796]: I1205 10:40:22.908566 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-cq48h" Dec 05 10:40:22 crc kubenswrapper[4796]: I1205 10:40:22.908586 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 05 10:40:22 crc kubenswrapper[4796]: I1205 10:40:22.929047 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.057694 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-config\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.057737 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.057912 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.058158 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.058244 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.058325 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.058388 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.058451 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd5n7\" (UniqueName: \"kubernetes.io/projected/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-kube-api-access-vd5n7\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " 
pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.113269 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.115166 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.117454 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.117514 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-tdntp" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.117767 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.118421 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.131414 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.160367 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.160499 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.160567 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.160611 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.160636 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.160659 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd5n7\" (UniqueName: \"kubernetes.io/projected/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-kube-api-access-vd5n7\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.160757 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-config\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.160783 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.161142 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.161250 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.161884 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-config\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.162028 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.167027 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc 
kubenswrapper[4796]: I1205 10:40:23.167481 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.176473 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.177795 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd5n7\" (UniqueName: \"kubernetes.io/projected/7c35af0e-1df4-4529-a60f-3be3faaf8ec2-kube-api-access-vd5n7\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.179925 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7c35af0e-1df4-4529-a60f-3be3faaf8ec2\") " pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.220569 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.263032 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fjt9\" (UniqueName: \"kubernetes.io/projected/929f06e1-44b6-4ce2-9391-4d41a94538fb-kube-api-access-4fjt9\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.263114 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/929f06e1-44b6-4ce2-9391-4d41a94538fb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.263165 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/929f06e1-44b6-4ce2-9391-4d41a94538fb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.263280 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.263315 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/929f06e1-44b6-4ce2-9391-4d41a94538fb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " 
pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.263338 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/929f06e1-44b6-4ce2-9391-4d41a94538fb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.263419 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/929f06e1-44b6-4ce2-9391-4d41a94538fb-config\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.263521 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/929f06e1-44b6-4ce2-9391-4d41a94538fb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.365481 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/929f06e1-44b6-4ce2-9391-4d41a94538fb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.365563 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fjt9\" (UniqueName: \"kubernetes.io/projected/929f06e1-44b6-4ce2-9391-4d41a94538fb-kube-api-access-4fjt9\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.365604 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/929f06e1-44b6-4ce2-9391-4d41a94538fb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.365660 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/929f06e1-44b6-4ce2-9391-4d41a94538fb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.365693 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.365709 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/929f06e1-44b6-4ce2-9391-4d41a94538fb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.365731 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/929f06e1-44b6-4ce2-9391-4d41a94538fb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.365800 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/929f06e1-44b6-4ce2-9391-4d41a94538fb-config\") pod \"ovsdbserver-sb-0\" 
(UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.365892 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.366278 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/929f06e1-44b6-4ce2-9391-4d41a94538fb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.366477 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/929f06e1-44b6-4ce2-9391-4d41a94538fb-config\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.366739 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/929f06e1-44b6-4ce2-9391-4d41a94538fb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.370069 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/929f06e1-44b6-4ce2-9391-4d41a94538fb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.370192 4796 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/929f06e1-44b6-4ce2-9391-4d41a94538fb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.381274 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fjt9\" (UniqueName: \"kubernetes.io/projected/929f06e1-44b6-4ce2-9391-4d41a94538fb-kube-api-access-4fjt9\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.381971 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.389406 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/929f06e1-44b6-4ce2-9391-4d41a94538fb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"929f06e1-44b6-4ce2-9391-4d41a94538fb\") " pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:23 crc kubenswrapper[4796]: I1205 10:40:23.435652 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:25 crc kubenswrapper[4796]: I1205 10:40:25.559861 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb666b895-qsrsm" Dec 05 10:40:25 crc kubenswrapper[4796]: I1205 10:40:25.600368 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bc4b48fc9-h6zwn"] Dec 05 10:40:27 crc kubenswrapper[4796]: I1205 10:40:27.374713 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-thfdb"] Dec 05 10:40:27 crc kubenswrapper[4796]: W1205 10:40:27.456872 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbbb0427_a102_4a95_a44e_d809d4334090.slice/crio-be8af9cd361b7e6205b0fb7acc14e24965b953755a0360d0cbaf88aae05b0f4c WatchSource:0}: Error finding container be8af9cd361b7e6205b0fb7acc14e24965b953755a0360d0cbaf88aae05b0f4c: Status 404 returned error can't find the container with id be8af9cd361b7e6205b0fb7acc14e24965b953755a0360d0cbaf88aae05b0f4c Dec 05 10:40:27 crc kubenswrapper[4796]: I1205 10:40:27.617532 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 05 10:40:27 crc kubenswrapper[4796]: W1205 10:40:27.664022 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c35af0e_1df4_4529_a60f_3be3faaf8ec2.slice/crio-04a45df04b977e5e619bf90d3ac3a69a9582ccafd908801839da0e77c7322561 WatchSource:0}: Error finding container 04a45df04b977e5e619bf90d3ac3a69a9582ccafd908801839da0e77c7322561: Status 404 returned error can't find the container with id 04a45df04b977e5e619bf90d3ac3a69a9582ccafd908801839da0e77c7322561 Dec 05 10:40:27 crc kubenswrapper[4796]: I1205 10:40:27.717641 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 
10:40:28.366753 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn" event={"ID":"d89aa469-fd41-4537-935b-5a03efe42ce8","Type":"ContainerStarted","Data":"d77ec8cb805aa6415249bc9354e453f61d0f8529aa3cf47c549e994891b1fb35"} Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.368970 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c41250df-9d98-441a-a88f-6a17034b8d31","Type":"ContainerStarted","Data":"1211fc2c4cea2002e17beb7165bf2b35231e9089ef4c7aaac2e90976b03ea655"} Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.366834 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn" podUID="d89aa469-fd41-4537-935b-5a03efe42ce8" containerName="dnsmasq-dns" containerID="cri-o://d77ec8cb805aa6415249bc9354e453f61d0f8529aa3cf47c549e994891b1fb35" gracePeriod=10 Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.369013 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn" Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.369134 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.371414 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vb5l5" event={"ID":"2f5f2848-5f90-4a9f-a6f0-b6e83b586402","Type":"ContainerStarted","Data":"690c07c8e0d41f06469ccb466da90430a6d7c8e2baf7fdaab4198a6eddf2636f"} Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.371532 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-vb5l5" Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.373135 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"d25dde1a-1eba-4b84-b637-134daea7451e","Type":"ContainerStarted","Data":"26359e43cd858b1a5ea823cf04c23cf0b9c6ebc51e05ec16bcb98edd7baed580"} Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.373542 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.374321 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"929f06e1-44b6-4ce2-9391-4d41a94538fb","Type":"ContainerStarted","Data":"b1194228b0a8f7c2cdcc611af377a111cdb26aef61b88910a1a0ee0a2756ddbf"} Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.375631 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7c35af0e-1df4-4529-a60f-3be3faaf8ec2","Type":"ContainerStarted","Data":"04a45df04b977e5e619bf90d3ac3a69a9582ccafd908801839da0e77c7322561"} Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.378886 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-thfdb" event={"ID":"dbbb0427-a102-4a95-a44e-d809d4334090","Type":"ContainerStarted","Data":"be8af9cd361b7e6205b0fb7acc14e24965b953755a0360d0cbaf88aae05b0f4c"} Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.382434 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e649ad8d-0ed0-495c-9abc-7220d750f060","Type":"ContainerStarted","Data":"f6edee226d4f06edb7cf704432266a915b34de142ec0377b09bcd55bc742b371"} Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.385870 4796 generic.go:334] "Generic (PLEG): container finished" podID="c56d752f-ca0a-4be7-b05b-381b1a5fdfc6" containerID="fdc068df61968815befec7b4d3f76c3b0e75231061974b8bf20acc52b6f8dec0" exitCode=0 Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.385948 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnpfk" 
event={"ID":"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6","Type":"ContainerDied","Data":"fdc068df61968815befec7b4d3f76c3b0e75231061974b8bf20acc52b6f8dec0"} Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.387211 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0deffc65-bff4-419f-aa12-2c17432112a3","Type":"ContainerStarted","Data":"78e5cf0b30b9ff766b08825fb641d13511b22fa49fafceba3d691566df22c5f5"} Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.389344 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z7p8z" event={"ID":"f90214e2-6d6d-42b7-8a46-0fb779d31cba","Type":"ContainerStarted","Data":"e15c8b7bdb071db27a222623d6916a0191075f9d4f8a41ace5dfa733ad113faa"} Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.392461 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn" podStartSLOduration=10.269189451999999 podStartE2EDuration="19.392445629s" podCreationTimestamp="2025-12-05 10:40:09 +0000 UTC" firstStartedPulling="2025-12-05 10:40:10.474053125 +0000 UTC m=+756.762158638" lastFinishedPulling="2025-12-05 10:40:19.597309302 +0000 UTC m=+765.885414815" observedRunningTime="2025-12-05 10:40:28.381018742 +0000 UTC m=+774.669124254" watchObservedRunningTime="2025-12-05 10:40:28.392445629 +0000 UTC m=+774.680551133" Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.399264 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-vb5l5" podStartSLOduration=3.287478257 podStartE2EDuration="9.399254159s" podCreationTimestamp="2025-12-05 10:40:19 +0000 UTC" firstStartedPulling="2025-12-05 10:40:21.113118368 +0000 UTC m=+767.401223880" lastFinishedPulling="2025-12-05 10:40:27.224894269 +0000 UTC m=+773.512999782" observedRunningTime="2025-12-05 10:40:28.395910879 +0000 UTC m=+774.684016391" watchObservedRunningTime="2025-12-05 10:40:28.399254159 +0000 UTC 
m=+774.687359672" Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.412725 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=7.555028633 podStartE2EDuration="14.412640443s" podCreationTimestamp="2025-12-05 10:40:14 +0000 UTC" firstStartedPulling="2025-12-05 10:40:20.126585735 +0000 UTC m=+766.414691248" lastFinishedPulling="2025-12-05 10:40:26.984197544 +0000 UTC m=+773.272303058" observedRunningTime="2025-12-05 10:40:28.408869047 +0000 UTC m=+774.696974561" watchObservedRunningTime="2025-12-05 10:40:28.412640443 +0000 UTC m=+774.700745956" Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.425853 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=5.559541255 podStartE2EDuration="12.425841961s" podCreationTimestamp="2025-12-05 10:40:16 +0000 UTC" firstStartedPulling="2025-12-05 10:40:20.123899371 +0000 UTC m=+766.412004884" lastFinishedPulling="2025-12-05 10:40:26.990200077 +0000 UTC m=+773.278305590" observedRunningTime="2025-12-05 10:40:28.4228604 +0000 UTC m=+774.710965914" watchObservedRunningTime="2025-12-05 10:40:28.425841961 +0000 UTC m=+774.713947474" Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.871227 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn" Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.976947 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d89aa469-fd41-4537-935b-5a03efe42ce8-config\") pod \"d89aa469-fd41-4537-935b-5a03efe42ce8\" (UID: \"d89aa469-fd41-4537-935b-5a03efe42ce8\") " Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.977018 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d89aa469-fd41-4537-935b-5a03efe42ce8-dns-svc\") pod \"d89aa469-fd41-4537-935b-5a03efe42ce8\" (UID: \"d89aa469-fd41-4537-935b-5a03efe42ce8\") " Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.977044 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n22b\" (UniqueName: \"kubernetes.io/projected/d89aa469-fd41-4537-935b-5a03efe42ce8-kube-api-access-9n22b\") pod \"d89aa469-fd41-4537-935b-5a03efe42ce8\" (UID: \"d89aa469-fd41-4537-935b-5a03efe42ce8\") " Dec 05 10:40:28 crc kubenswrapper[4796]: I1205 10:40:28.982470 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d89aa469-fd41-4537-935b-5a03efe42ce8-kube-api-access-9n22b" (OuterVolumeSpecName: "kube-api-access-9n22b") pod "d89aa469-fd41-4537-935b-5a03efe42ce8" (UID: "d89aa469-fd41-4537-935b-5a03efe42ce8"). InnerVolumeSpecName "kube-api-access-9n22b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.013874 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d89aa469-fd41-4537-935b-5a03efe42ce8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d89aa469-fd41-4537-935b-5a03efe42ce8" (UID: "d89aa469-fd41-4537-935b-5a03efe42ce8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.014306 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d89aa469-fd41-4537-935b-5a03efe42ce8-config" (OuterVolumeSpecName: "config") pod "d89aa469-fd41-4537-935b-5a03efe42ce8" (UID: "d89aa469-fd41-4537-935b-5a03efe42ce8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.079710 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d89aa469-fd41-4537-935b-5a03efe42ce8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.079742 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n22b\" (UniqueName: \"kubernetes.io/projected/d89aa469-fd41-4537-935b-5a03efe42ce8-kube-api-access-9n22b\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.079755 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d89aa469-fd41-4537-935b-5a03efe42ce8-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.398877 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnpfk" event={"ID":"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6","Type":"ContainerStarted","Data":"afe353db3259054ca2498847aa251d74d7c2dd266676b9656b20f1da5997425a"} Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.400535 4796 generic.go:334] "Generic (PLEG): container finished" podID="f90214e2-6d6d-42b7-8a46-0fb779d31cba" containerID="e15c8b7bdb071db27a222623d6916a0191075f9d4f8a41ace5dfa733ad113faa" exitCode=0 Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.400617 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z7p8z" 
event={"ID":"f90214e2-6d6d-42b7-8a46-0fb779d31cba","Type":"ContainerDied","Data":"e15c8b7bdb071db27a222623d6916a0191075f9d4f8a41ace5dfa733ad113faa"} Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.402913 4796 generic.go:334] "Generic (PLEG): container finished" podID="d89aa469-fd41-4537-935b-5a03efe42ce8" containerID="d77ec8cb805aa6415249bc9354e453f61d0f8529aa3cf47c549e994891b1fb35" exitCode=0 Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.402976 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn" Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.403003 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn" event={"ID":"d89aa469-fd41-4537-935b-5a03efe42ce8","Type":"ContainerDied","Data":"d77ec8cb805aa6415249bc9354e453f61d0f8529aa3cf47c549e994891b1fb35"} Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.403047 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bc4b48fc9-h6zwn" event={"ID":"d89aa469-fd41-4537-935b-5a03efe42ce8","Type":"ContainerDied","Data":"a7ed644e42d27c8c6cb05e57a930870cc63ddc8f619ff9328e13958557ac5076"} Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.403070 4796 scope.go:117] "RemoveContainer" containerID="d77ec8cb805aa6415249bc9354e453f61d0f8529aa3cf47c549e994891b1fb35" Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.409273 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7c35af0e-1df4-4529-a60f-3be3faaf8ec2","Type":"ContainerStarted","Data":"87416227d20d6bba20894c02ff70d690ab05852e9ac69c4a7e157b952489fa6d"} Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.411405 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"f735325f-6e38-45a2-a5bd-9ad19c40b36f","Type":"ContainerStarted","Data":"90d497911455b9ebcc3523bfde3fcf72dc248a8643992a0cc697cee2006b0b31"} Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.413175 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"929f06e1-44b6-4ce2-9391-4d41a94538fb","Type":"ContainerStarted","Data":"9caba8011587579f300d2e8533089b54ca781f184080d885ef865c83e3ff79d7"} Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.418600 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1a61d456-9eea-447f-b576-77473222d108","Type":"ContainerStarted","Data":"41edc440f58fb63cf7fd571cf2e1576f6d345add64c3aa95d2ab5e12423231cf"} Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.420628 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dnpfk" podStartSLOduration=8.862809155 podStartE2EDuration="16.420611367s" podCreationTimestamp="2025-12-05 10:40:13 +0000 UTC" firstStartedPulling="2025-12-05 10:40:21.485038548 +0000 UTC m=+767.773144061" lastFinishedPulling="2025-12-05 10:40:29.04284076 +0000 UTC m=+775.330946273" observedRunningTime="2025-12-05 10:40:29.41881553 +0000 UTC m=+775.706921043" watchObservedRunningTime="2025-12-05 10:40:29.420611367 +0000 UTC m=+775.708716881" Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.512840 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bc4b48fc9-h6zwn"] Dec 05 10:40:29 crc kubenswrapper[4796]: I1205 10:40:29.517844 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bc4b48fc9-h6zwn"] Dec 05 10:40:30 crc kubenswrapper[4796]: I1205 10:40:30.044267 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d89aa469-fd41-4537-935b-5a03efe42ce8" path="/var/lib/kubelet/pods/d89aa469-fd41-4537-935b-5a03efe42ce8/volumes" Dec 05 10:40:30 crc 
kubenswrapper[4796]: I1205 10:40:30.319355 4796 scope.go:117] "RemoveContainer" containerID="9592d37190b7a01734fdb2d5a9a25ca32b851bc9328d6aba100851a3d0ef5795" Dec 05 10:40:30 crc kubenswrapper[4796]: I1205 10:40:30.490063 4796 scope.go:117] "RemoveContainer" containerID="d77ec8cb805aa6415249bc9354e453f61d0f8529aa3cf47c549e994891b1fb35" Dec 05 10:40:30 crc kubenswrapper[4796]: E1205 10:40:30.490646 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d77ec8cb805aa6415249bc9354e453f61d0f8529aa3cf47c549e994891b1fb35\": container with ID starting with d77ec8cb805aa6415249bc9354e453f61d0f8529aa3cf47c549e994891b1fb35 not found: ID does not exist" containerID="d77ec8cb805aa6415249bc9354e453f61d0f8529aa3cf47c549e994891b1fb35" Dec 05 10:40:30 crc kubenswrapper[4796]: I1205 10:40:30.490668 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d77ec8cb805aa6415249bc9354e453f61d0f8529aa3cf47c549e994891b1fb35"} err="failed to get container status \"d77ec8cb805aa6415249bc9354e453f61d0f8529aa3cf47c549e994891b1fb35\": rpc error: code = NotFound desc = could not find container \"d77ec8cb805aa6415249bc9354e453f61d0f8529aa3cf47c549e994891b1fb35\": container with ID starting with d77ec8cb805aa6415249bc9354e453f61d0f8529aa3cf47c549e994891b1fb35 not found: ID does not exist" Dec 05 10:40:30 crc kubenswrapper[4796]: I1205 10:40:30.490703 4796 scope.go:117] "RemoveContainer" containerID="9592d37190b7a01734fdb2d5a9a25ca32b851bc9328d6aba100851a3d0ef5795" Dec 05 10:40:30 crc kubenswrapper[4796]: E1205 10:40:30.490903 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9592d37190b7a01734fdb2d5a9a25ca32b851bc9328d6aba100851a3d0ef5795\": container with ID starting with 9592d37190b7a01734fdb2d5a9a25ca32b851bc9328d6aba100851a3d0ef5795 not found: ID does not exist" 
containerID="9592d37190b7a01734fdb2d5a9a25ca32b851bc9328d6aba100851a3d0ef5795" Dec 05 10:40:30 crc kubenswrapper[4796]: I1205 10:40:30.490919 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9592d37190b7a01734fdb2d5a9a25ca32b851bc9328d6aba100851a3d0ef5795"} err="failed to get container status \"9592d37190b7a01734fdb2d5a9a25ca32b851bc9328d6aba100851a3d0ef5795\": rpc error: code = NotFound desc = could not find container \"9592d37190b7a01734fdb2d5a9a25ca32b851bc9328d6aba100851a3d0ef5795\": container with ID starting with 9592d37190b7a01734fdb2d5a9a25ca32b851bc9328d6aba100851a3d0ef5795 not found: ID does not exist" Dec 05 10:40:31 crc kubenswrapper[4796]: E1205 10:40:31.363244 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode649ad8d_0ed0_495c_9abc_7220d750f060.slice/crio-f6edee226d4f06edb7cf704432266a915b34de142ec0377b09bcd55bc742b371.scope\": RecentStats: unable to find data in memory cache]" Dec 05 10:40:33 crc kubenswrapper[4796]: I1205 10:40:33.456308 4796 generic.go:334] "Generic (PLEG): container finished" podID="0deffc65-bff4-419f-aa12-2c17432112a3" containerID="78e5cf0b30b9ff766b08825fb641d13511b22fa49fafceba3d691566df22c5f5" exitCode=0 Dec 05 10:40:33 crc kubenswrapper[4796]: I1205 10:40:33.456423 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0deffc65-bff4-419f-aa12-2c17432112a3","Type":"ContainerDied","Data":"78e5cf0b30b9ff766b08825fb641d13511b22fa49fafceba3d691566df22c5f5"} Dec 05 10:40:33 crc kubenswrapper[4796]: I1205 10:40:33.458425 4796 generic.go:334] "Generic (PLEG): container finished" podID="e649ad8d-0ed0-495c-9abc-7220d750f060" containerID="f6edee226d4f06edb7cf704432266a915b34de142ec0377b09bcd55bc742b371" exitCode=0 Dec 05 10:40:33 crc kubenswrapper[4796]: I1205 10:40:33.458469 4796 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"e649ad8d-0ed0-495c-9abc-7220d750f060","Type":"ContainerDied","Data":"f6edee226d4f06edb7cf704432266a915b34de142ec0377b09bcd55bc742b371"} Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:33.999965 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dnpfk" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.000343 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dnpfk" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.038912 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dnpfk" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.485545 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-thfdb" event={"ID":"dbbb0427-a102-4a95-a44e-d809d4334090","Type":"ContainerStarted","Data":"b74744bd6461e9d80496beac7c921f949507eb425ecbcc72f4afb3e7ce46edcd"} Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.487982 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"929f06e1-44b6-4ce2-9391-4d41a94538fb","Type":"ContainerStarted","Data":"568153f09ac864dbdcd8783bc5c72afa79b1100b3f89953563077fc191909d36"} Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.508193 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-thfdb" podStartSLOduration=10.454407919 podStartE2EDuration="13.508172865s" podCreationTimestamp="2025-12-05 10:40:21 +0000 UTC" firstStartedPulling="2025-12-05 10:40:27.465795779 +0000 UTC m=+773.753901292" lastFinishedPulling="2025-12-05 10:40:30.519560726 +0000 UTC m=+776.807666238" observedRunningTime="2025-12-05 10:40:34.497033207 +0000 UTC m=+780.785138721" watchObservedRunningTime="2025-12-05 10:40:34.508172865 
+0000 UTC m=+780.796278378" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.525459 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=9.337783837 podStartE2EDuration="12.525440611s" podCreationTimestamp="2025-12-05 10:40:22 +0000 UTC" firstStartedPulling="2025-12-05 10:40:27.75348349 +0000 UTC m=+774.041589002" lastFinishedPulling="2025-12-05 10:40:30.941140264 +0000 UTC m=+777.229245776" observedRunningTime="2025-12-05 10:40:34.51594072 +0000 UTC m=+780.804046233" watchObservedRunningTime="2025-12-05 10:40:34.525440611 +0000 UTC m=+780.813546125" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.541337 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dnpfk" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.587873 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.607864 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dnpfk"] Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.711594 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57db9b5bc9-lll7l"] Dec 05 10:40:34 crc kubenswrapper[4796]: E1205 10:40:34.712243 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89aa469-fd41-4537-935b-5a03efe42ce8" containerName="init" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.712256 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d89aa469-fd41-4537-935b-5a03efe42ce8" containerName="init" Dec 05 10:40:34 crc kubenswrapper[4796]: E1205 10:40:34.712269 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d89aa469-fd41-4537-935b-5a03efe42ce8" containerName="dnsmasq-dns" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.712274 4796 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="d89aa469-fd41-4537-935b-5a03efe42ce8" containerName="dnsmasq-dns" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.712438 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d89aa469-fd41-4537-935b-5a03efe42ce8" containerName="dnsmasq-dns" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.713267 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.716144 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.723355 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57db9b5bc9-lll7l"] Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.779038 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92a309cc-d0ff-4997-b17e-6b45583bf286-config\") pod \"dnsmasq-dns-57db9b5bc9-lll7l\" (UID: \"92a309cc-d0ff-4997-b17e-6b45583bf286\") " pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.779083 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92a309cc-d0ff-4997-b17e-6b45583bf286-ovsdbserver-nb\") pod \"dnsmasq-dns-57db9b5bc9-lll7l\" (UID: \"92a309cc-d0ff-4997-b17e-6b45583bf286\") " pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.779291 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4c45\" (UniqueName: \"kubernetes.io/projected/92a309cc-d0ff-4997-b17e-6b45583bf286-kube-api-access-k4c45\") pod \"dnsmasq-dns-57db9b5bc9-lll7l\" (UID: \"92a309cc-d0ff-4997-b17e-6b45583bf286\") " 
pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.779425 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92a309cc-d0ff-4997-b17e-6b45583bf286-dns-svc\") pod \"dnsmasq-dns-57db9b5bc9-lll7l\" (UID: \"92a309cc-d0ff-4997-b17e-6b45583bf286\") " pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.881620 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92a309cc-d0ff-4997-b17e-6b45583bf286-dns-svc\") pod \"dnsmasq-dns-57db9b5bc9-lll7l\" (UID: \"92a309cc-d0ff-4997-b17e-6b45583bf286\") " pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.881721 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92a309cc-d0ff-4997-b17e-6b45583bf286-config\") pod \"dnsmasq-dns-57db9b5bc9-lll7l\" (UID: \"92a309cc-d0ff-4997-b17e-6b45583bf286\") " pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.881746 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92a309cc-d0ff-4997-b17e-6b45583bf286-ovsdbserver-nb\") pod \"dnsmasq-dns-57db9b5bc9-lll7l\" (UID: \"92a309cc-d0ff-4997-b17e-6b45583bf286\") " pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.881821 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4c45\" (UniqueName: \"kubernetes.io/projected/92a309cc-d0ff-4997-b17e-6b45583bf286-kube-api-access-k4c45\") pod \"dnsmasq-dns-57db9b5bc9-lll7l\" (UID: \"92a309cc-d0ff-4997-b17e-6b45583bf286\") " pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" Dec 05 10:40:34 crc 
kubenswrapper[4796]: I1205 10:40:34.883086 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92a309cc-d0ff-4997-b17e-6b45583bf286-config\") pod \"dnsmasq-dns-57db9b5bc9-lll7l\" (UID: \"92a309cc-d0ff-4997-b17e-6b45583bf286\") " pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.883140 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92a309cc-d0ff-4997-b17e-6b45583bf286-ovsdbserver-nb\") pod \"dnsmasq-dns-57db9b5bc9-lll7l\" (UID: \"92a309cc-d0ff-4997-b17e-6b45583bf286\") " pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.883136 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92a309cc-d0ff-4997-b17e-6b45583bf286-dns-svc\") pod \"dnsmasq-dns-57db9b5bc9-lll7l\" (UID: \"92a309cc-d0ff-4997-b17e-6b45583bf286\") " pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.904468 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4c45\" (UniqueName: \"kubernetes.io/projected/92a309cc-d0ff-4997-b17e-6b45583bf286-kube-api-access-k4c45\") pod \"dnsmasq-dns-57db9b5bc9-lll7l\" (UID: \"92a309cc-d0ff-4997-b17e-6b45583bf286\") " pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.924929 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57db9b5bc9-lll7l"] Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.925769 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.952267 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-db7757ddc-flspl"] Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.959766 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-db7757ddc-flspl" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.961922 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 05 10:40:34 crc kubenswrapper[4796]: I1205 10:40:34.974572 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-db7757ddc-flspl"] Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.086661 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-ovsdbserver-nb\") pod \"dnsmasq-dns-db7757ddc-flspl\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " pod="openstack/dnsmasq-dns-db7757ddc-flspl" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.086759 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-ovsdbserver-sb\") pod \"dnsmasq-dns-db7757ddc-flspl\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " pod="openstack/dnsmasq-dns-db7757ddc-flspl" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.086821 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cljq\" (UniqueName: \"kubernetes.io/projected/67f1c191-39ec-4814-b042-e7757b84a4a3-kube-api-access-5cljq\") pod \"dnsmasq-dns-db7757ddc-flspl\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " pod="openstack/dnsmasq-dns-db7757ddc-flspl" Dec 05 10:40:35 crc 
kubenswrapper[4796]: I1205 10:40:35.086871 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-config\") pod \"dnsmasq-dns-db7757ddc-flspl\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " pod="openstack/dnsmasq-dns-db7757ddc-flspl" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.086900 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-dns-svc\") pod \"dnsmasq-dns-db7757ddc-flspl\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " pod="openstack/dnsmasq-dns-db7757ddc-flspl" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.188564 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-ovsdbserver-sb\") pod \"dnsmasq-dns-db7757ddc-flspl\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " pod="openstack/dnsmasq-dns-db7757ddc-flspl" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.188623 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cljq\" (UniqueName: \"kubernetes.io/projected/67f1c191-39ec-4814-b042-e7757b84a4a3-kube-api-access-5cljq\") pod \"dnsmasq-dns-db7757ddc-flspl\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " pod="openstack/dnsmasq-dns-db7757ddc-flspl" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.188659 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-config\") pod \"dnsmasq-dns-db7757ddc-flspl\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " pod="openstack/dnsmasq-dns-db7757ddc-flspl" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.188702 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-dns-svc\") pod \"dnsmasq-dns-db7757ddc-flspl\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " pod="openstack/dnsmasq-dns-db7757ddc-flspl" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.188764 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-ovsdbserver-nb\") pod \"dnsmasq-dns-db7757ddc-flspl\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " pod="openstack/dnsmasq-dns-db7757ddc-flspl" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.189557 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-ovsdbserver-sb\") pod \"dnsmasq-dns-db7757ddc-flspl\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " pod="openstack/dnsmasq-dns-db7757ddc-flspl" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.189578 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-dns-svc\") pod \"dnsmasq-dns-db7757ddc-flspl\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " pod="openstack/dnsmasq-dns-db7757ddc-flspl" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.189885 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-ovsdbserver-nb\") pod \"dnsmasq-dns-db7757ddc-flspl\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " pod="openstack/dnsmasq-dns-db7757ddc-flspl" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.190270 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-config\") pod \"dnsmasq-dns-db7757ddc-flspl\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " pod="openstack/dnsmasq-dns-db7757ddc-flspl" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.203742 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cljq\" (UniqueName: \"kubernetes.io/projected/67f1c191-39ec-4814-b042-e7757b84a4a3-kube-api-access-5cljq\") pod \"dnsmasq-dns-db7757ddc-flspl\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " pod="openstack/dnsmasq-dns-db7757ddc-flspl" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.273396 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-db7757ddc-flspl" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.320618 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57db9b5bc9-lll7l"] Dec 05 10:40:35 crc kubenswrapper[4796]: W1205 10:40:35.326850 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92a309cc_d0ff_4997_b17e_6b45583bf286.slice/crio-4cc7307183c4b1bde9342881b22c03fec8043f28fe07b756a577ef72c636bb44 WatchSource:0}: Error finding container 4cc7307183c4b1bde9342881b22c03fec8043f28fe07b756a577ef72c636bb44: Status 404 returned error can't find the container with id 4cc7307183c4b1bde9342881b22c03fec8043f28fe07b756a577ef72c636bb44 Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.436455 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.467966 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.501736 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"0deffc65-bff4-419f-aa12-2c17432112a3","Type":"ContainerStarted","Data":"d510900d07b416723dffaf59b3f6ea35c61fe3f8db375c41d817c6020fe5be84"} Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.504230 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z7p8z" event={"ID":"f90214e2-6d6d-42b7-8a46-0fb779d31cba","Type":"ContainerStarted","Data":"a21caa6cea2cf000b163fb7800f34854faae103d837e60d176032ec2a3689ba7"} Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.504277 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z7p8z" event={"ID":"f90214e2-6d6d-42b7-8a46-0fb779d31cba","Type":"ContainerStarted","Data":"0e8ce60d17c447fd92a0421393d69e596d802c4b9c3165ecebbbe379daa5fe2b"} Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.504307 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.506137 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7c35af0e-1df4-4529-a60f-3be3faaf8ec2","Type":"ContainerStarted","Data":"724f593723354f58920afb2eb0cc19da22fd07453ed87ec18e72997fa941ade3"} Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.507642 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" event={"ID":"92a309cc-d0ff-4997-b17e-6b45583bf286","Type":"ContainerStarted","Data":"2de06d35a2f0c9dc4321408dcf417ebe6a9b8168e06484672e489a2fc147561c"} Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.507668 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" event={"ID":"92a309cc-d0ff-4997-b17e-6b45583bf286","Type":"ContainerStarted","Data":"4cc7307183c4b1bde9342881b22c03fec8043f28fe07b756a577ef72c636bb44"} Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.509245 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-galera-0" event={"ID":"e649ad8d-0ed0-495c-9abc-7220d750f060","Type":"ContainerStarted","Data":"15253e665eb4bb025593c8846ab975ab20d2fecf70a6cd37f5554264df04e220"} Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.509867 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.518327 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=16.427636165 podStartE2EDuration="23.518314734s" podCreationTimestamp="2025-12-05 10:40:12 +0000 UTC" firstStartedPulling="2025-12-05 10:40:20.128987644 +0000 UTC m=+766.417093157" lastFinishedPulling="2025-12-05 10:40:27.219666213 +0000 UTC m=+773.507771726" observedRunningTime="2025-12-05 10:40:35.513455503 +0000 UTC m=+781.801561016" watchObservedRunningTime="2025-12-05 10:40:35.518314734 +0000 UTC m=+781.806420248" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.529504 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=7.634244558 podStartE2EDuration="14.529492614s" podCreationTimestamp="2025-12-05 10:40:21 +0000 UTC" firstStartedPulling="2025-12-05 10:40:27.666447724 +0000 UTC m=+773.954553237" lastFinishedPulling="2025-12-05 10:40:34.56169578 +0000 UTC m=+780.849801293" observedRunningTime="2025-12-05 10:40:35.526661227 +0000 UTC m=+781.814766740" watchObservedRunningTime="2025-12-05 10:40:35.529492614 +0000 UTC m=+781.817598128" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.547571 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=17.459012816 podStartE2EDuration="24.547557711s" podCreationTimestamp="2025-12-05 10:40:11 +0000 UTC" firstStartedPulling="2025-12-05 10:40:20.133977161 +0000 UTC m=+766.422082674" lastFinishedPulling="2025-12-05 
10:40:27.222522056 +0000 UTC m=+773.510627569" observedRunningTime="2025-12-05 10:40:35.539762926 +0000 UTC m=+781.827868520" watchObservedRunningTime="2025-12-05 10:40:35.547557711 +0000 UTC m=+781.835663224" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.562296 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.570221 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-z7p8z" podStartSLOduration=10.4787934 podStartE2EDuration="16.570208596s" podCreationTimestamp="2025-12-05 10:40:19 +0000 UTC" firstStartedPulling="2025-12-05 10:40:21.115251329 +0000 UTC m=+767.403356843" lastFinishedPulling="2025-12-05 10:40:27.206666526 +0000 UTC m=+773.494772039" observedRunningTime="2025-12-05 10:40:35.568789514 +0000 UTC m=+781.856895028" watchObservedRunningTime="2025-12-05 10:40:35.570208596 +0000 UTC m=+781.858314108" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.641371 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-db7757ddc-flspl"] Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.754491 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.798729 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92a309cc-d0ff-4997-b17e-6b45583bf286-dns-svc\") pod \"92a309cc-d0ff-4997-b17e-6b45583bf286\" (UID: \"92a309cc-d0ff-4997-b17e-6b45583bf286\") " Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.798784 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92a309cc-d0ff-4997-b17e-6b45583bf286-config\") pod \"92a309cc-d0ff-4997-b17e-6b45583bf286\" (UID: \"92a309cc-d0ff-4997-b17e-6b45583bf286\") " Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.798960 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92a309cc-d0ff-4997-b17e-6b45583bf286-ovsdbserver-nb\") pod \"92a309cc-d0ff-4997-b17e-6b45583bf286\" (UID: \"92a309cc-d0ff-4997-b17e-6b45583bf286\") " Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.799067 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4c45\" (UniqueName: \"kubernetes.io/projected/92a309cc-d0ff-4997-b17e-6b45583bf286-kube-api-access-k4c45\") pod \"92a309cc-d0ff-4997-b17e-6b45583bf286\" (UID: \"92a309cc-d0ff-4997-b17e-6b45583bf286\") " Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.803885 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92a309cc-d0ff-4997-b17e-6b45583bf286-kube-api-access-k4c45" (OuterVolumeSpecName: "kube-api-access-k4c45") pod "92a309cc-d0ff-4997-b17e-6b45583bf286" (UID: "92a309cc-d0ff-4997-b17e-6b45583bf286"). InnerVolumeSpecName "kube-api-access-k4c45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.814322 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92a309cc-d0ff-4997-b17e-6b45583bf286-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "92a309cc-d0ff-4997-b17e-6b45583bf286" (UID: "92a309cc-d0ff-4997-b17e-6b45583bf286"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.818464 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92a309cc-d0ff-4997-b17e-6b45583bf286-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "92a309cc-d0ff-4997-b17e-6b45583bf286" (UID: "92a309cc-d0ff-4997-b17e-6b45583bf286"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.820146 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92a309cc-d0ff-4997-b17e-6b45583bf286-config" (OuterVolumeSpecName: "config") pod "92a309cc-d0ff-4997-b17e-6b45583bf286" (UID: "92a309cc-d0ff-4997-b17e-6b45583bf286"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.900882 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4c45\" (UniqueName: \"kubernetes.io/projected/92a309cc-d0ff-4997-b17e-6b45583bf286-kube-api-access-k4c45\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.900913 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/92a309cc-d0ff-4997-b17e-6b45583bf286-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.900925 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92a309cc-d0ff-4997-b17e-6b45583bf286-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:35 crc kubenswrapper[4796]: I1205 10:40:35.900937 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/92a309cc-d0ff-4997-b17e-6b45583bf286-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.439487 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-db7757ddc-flspl"] Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.453489 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.477495 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59d5fbdd8c-tgjp2"] Dec 05 10:40:36 crc kubenswrapper[4796]: E1205 10:40:36.477775 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a309cc-d0ff-4997-b17e-6b45583bf286" containerName="init" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.477792 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a309cc-d0ff-4997-b17e-6b45583bf286" containerName="init" Dec 05 
10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.477944 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="92a309cc-d0ff-4997-b17e-6b45583bf286" containerName="init" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.478632 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.508840 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d5fbdd8c-tgjp2"] Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.514565 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5fbdd8c-tgjp2\" (UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.514598 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7qbr\" (UniqueName: \"kubernetes.io/projected/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-kube-api-access-n7qbr\") pod \"dnsmasq-dns-59d5fbdd8c-tgjp2\" (UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.514630 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5fbdd8c-tgjp2\" (UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.514648 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-dns-svc\") pod \"dnsmasq-dns-59d5fbdd8c-tgjp2\" (UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.514694 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-config\") pod \"dnsmasq-dns-59d5fbdd8c-tgjp2\" (UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.529466 4796 generic.go:334] "Generic (PLEG): container finished" podID="67f1c191-39ec-4814-b042-e7757b84a4a3" containerID="05b765c0449976fc29e926c9be388b6b232018125e41884f6d675d21de496c37" exitCode=0 Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.529519 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-db7757ddc-flspl" event={"ID":"67f1c191-39ec-4814-b042-e7757b84a4a3","Type":"ContainerDied","Data":"05b765c0449976fc29e926c9be388b6b232018125e41884f6d675d21de496c37"} Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.529542 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-db7757ddc-flspl" event={"ID":"67f1c191-39ec-4814-b042-e7757b84a4a3","Type":"ContainerStarted","Data":"402cb6a204cbf7cb3b9c90d4590382b1bfc004eca0e230c6f66bfaf36feba00d"} Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.535149 4796 generic.go:334] "Generic (PLEG): container finished" podID="92a309cc-d0ff-4997-b17e-6b45583bf286" containerID="2de06d35a2f0c9dc4321408dcf417ebe6a9b8168e06484672e489a2fc147561c" exitCode=0 Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.535995 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dnpfk" podUID="c56d752f-ca0a-4be7-b05b-381b1a5fdfc6" 
containerName="registry-server" containerID="cri-o://afe353db3259054ca2498847aa251d74d7c2dd266676b9656b20f1da5997425a" gracePeriod=2 Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.536155 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.536459 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" event={"ID":"92a309cc-d0ff-4997-b17e-6b45583bf286","Type":"ContainerDied","Data":"2de06d35a2f0c9dc4321408dcf417ebe6a9b8168e06484672e489a2fc147561c"} Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.536491 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.536501 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57db9b5bc9-lll7l" event={"ID":"92a309cc-d0ff-4997-b17e-6b45583bf286","Type":"ContainerDied","Data":"4cc7307183c4b1bde9342881b22c03fec8043f28fe07b756a577ef72c636bb44"} Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.536518 4796 scope.go:117] "RemoveContainer" containerID="2de06d35a2f0c9dc4321408dcf417ebe6a9b8168e06484672e489a2fc147561c" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.616173 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5fbdd8c-tgjp2\" (UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.616470 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7qbr\" (UniqueName: \"kubernetes.io/projected/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-kube-api-access-n7qbr\") pod \"dnsmasq-dns-59d5fbdd8c-tgjp2\" 
(UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.616532 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5fbdd8c-tgjp2\" (UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.616549 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-dns-svc\") pod \"dnsmasq-dns-59d5fbdd8c-tgjp2\" (UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.616608 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-config\") pod \"dnsmasq-dns-59d5fbdd8c-tgjp2\" (UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.629913 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-dns-svc\") pod \"dnsmasq-dns-59d5fbdd8c-tgjp2\" (UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.629950 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5fbdd8c-tgjp2\" (UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" Dec 05 10:40:36 crc 
kubenswrapper[4796]: I1205 10:40:36.630215 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5fbdd8c-tgjp2\" (UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.630283 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-config\") pod \"dnsmasq-dns-59d5fbdd8c-tgjp2\" (UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.682496 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57db9b5bc9-lll7l"] Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.703954 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7qbr\" (UniqueName: \"kubernetes.io/projected/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-kube-api-access-n7qbr\") pod \"dnsmasq-dns-59d5fbdd8c-tgjp2\" (UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.708078 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57db9b5bc9-lll7l"] Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.754290 4796 scope.go:117] "RemoveContainer" containerID="2de06d35a2f0c9dc4321408dcf417ebe6a9b8168e06484672e489a2fc147561c" Dec 05 10:40:36 crc kubenswrapper[4796]: E1205 10:40:36.756306 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2de06d35a2f0c9dc4321408dcf417ebe6a9b8168e06484672e489a2fc147561c\": container with ID starting with 2de06d35a2f0c9dc4321408dcf417ebe6a9b8168e06484672e489a2fc147561c not found: ID 
does not exist" containerID="2de06d35a2f0c9dc4321408dcf417ebe6a9b8168e06484672e489a2fc147561c" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.756372 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2de06d35a2f0c9dc4321408dcf417ebe6a9b8168e06484672e489a2fc147561c"} err="failed to get container status \"2de06d35a2f0c9dc4321408dcf417ebe6a9b8168e06484672e489a2fc147561c\": rpc error: code = NotFound desc = could not find container \"2de06d35a2f0c9dc4321408dcf417ebe6a9b8168e06484672e489a2fc147561c\": container with ID starting with 2de06d35a2f0c9dc4321408dcf417ebe6a9b8168e06484672e489a2fc147561c not found: ID does not exist" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.793356 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" Dec 05 10:40:36 crc kubenswrapper[4796]: E1205 10:40:36.847604 4796 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Dec 05 10:40:36 crc kubenswrapper[4796]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/67f1c191-39ec-4814-b042-e7757b84a4a3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 05 10:40:36 crc kubenswrapper[4796]: > podSandboxID="402cb6a204cbf7cb3b9c90d4590382b1bfc004eca0e230c6f66bfaf36feba00d" Dec 05 10:40:36 crc kubenswrapper[4796]: E1205 10:40:36.847857 4796 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 05 10:40:36 crc kubenswrapper[4796]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:42f5663a161307156673f86e5eaad59f842a4bf25824f48008e69ab18e4ba792,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv 
--bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n599h5cbh7ch5d4h66fh676hdbh546h95h88h5ffh55ch7fhch57ch687hddhc7h5fdh57dh674h56fh64ch98h9bh557h55dh646h54ch54fh5c4h597q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5cljq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-db7757ddc-flspl_openstack(67f1c191-39ec-4814-b042-e7757b84a4a3): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/67f1c191-39ec-4814-b042-e7757b84a4a3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 05 10:40:36 crc kubenswrapper[4796]: > logger="UnhandledError" Dec 05 10:40:36 crc kubenswrapper[4796]: E1205 10:40:36.849796 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/67f1c191-39ec-4814-b042-e7757b84a4a3/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-db7757ddc-flspl" podUID="67f1c191-39ec-4814-b042-e7757b84a4a3" Dec 05 10:40:36 crc kubenswrapper[4796]: I1205 10:40:36.993295 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dnpfk" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.130859 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c56d752f-ca0a-4be7-b05b-381b1a5fdfc6-utilities\") pod \"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6\" (UID: \"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6\") " Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.131058 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c56d752f-ca0a-4be7-b05b-381b1a5fdfc6-catalog-content\") pod \"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6\" (UID: \"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6\") " Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.131092 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9dr8\" (UniqueName: \"kubernetes.io/projected/c56d752f-ca0a-4be7-b05b-381b1a5fdfc6-kube-api-access-w9dr8\") pod \"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6\" (UID: \"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6\") " Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.132042 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c56d752f-ca0a-4be7-b05b-381b1a5fdfc6-utilities" (OuterVolumeSpecName: "utilities") pod "c56d752f-ca0a-4be7-b05b-381b1a5fdfc6" (UID: "c56d752f-ca0a-4be7-b05b-381b1a5fdfc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.136539 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c56d752f-ca0a-4be7-b05b-381b1a5fdfc6-kube-api-access-w9dr8" (OuterVolumeSpecName: "kube-api-access-w9dr8") pod "c56d752f-ca0a-4be7-b05b-381b1a5fdfc6" (UID: "c56d752f-ca0a-4be7-b05b-381b1a5fdfc6"). InnerVolumeSpecName "kube-api-access-w9dr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.146663 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c56d752f-ca0a-4be7-b05b-381b1a5fdfc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c56d752f-ca0a-4be7-b05b-381b1a5fdfc6" (UID: "c56d752f-ca0a-4be7-b05b-381b1a5fdfc6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.212748 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d5fbdd8c-tgjp2"] Dec 05 10:40:37 crc kubenswrapper[4796]: W1205 10:40:37.216804 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6eeb640e_fd19_4c09_b322_1aed1fa21fcc.slice/crio-3e9e2c7d11d983f436bea95f068e648b42ec460a68d95e4ad440ba38363d6c16 WatchSource:0}: Error finding container 3e9e2c7d11d983f436bea95f068e648b42ec460a68d95e4ad440ba38363d6c16: Status 404 returned error can't find the container with id 3e9e2c7d11d983f436bea95f068e648b42ec460a68d95e4ad440ba38363d6c16 Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.232843 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9dr8\" (UniqueName: \"kubernetes.io/projected/c56d752f-ca0a-4be7-b05b-381b1a5fdfc6-kube-api-access-w9dr8\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.232867 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c56d752f-ca0a-4be7-b05b-381b1a5fdfc6-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.232877 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c56d752f-ca0a-4be7-b05b-381b1a5fdfc6-catalog-content\") on node \"crc\" 
DevicePath \"\"" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.542699 4796 generic.go:334] "Generic (PLEG): container finished" podID="c56d752f-ca0a-4be7-b05b-381b1a5fdfc6" containerID="afe353db3259054ca2498847aa251d74d7c2dd266676b9656b20f1da5997425a" exitCode=0 Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.542780 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dnpfk" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.542778 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnpfk" event={"ID":"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6","Type":"ContainerDied","Data":"afe353db3259054ca2498847aa251d74d7c2dd266676b9656b20f1da5997425a"} Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.543226 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnpfk" event={"ID":"c56d752f-ca0a-4be7-b05b-381b1a5fdfc6","Type":"ContainerDied","Data":"193fecf62968d4ede0e6e8123e9fc1fdc6cd228025aa167e350909a7a4554b36"} Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.543278 4796 scope.go:117] "RemoveContainer" containerID="afe353db3259054ca2498847aa251d74d7c2dd266676b9656b20f1da5997425a" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.545287 4796 generic.go:334] "Generic (PLEG): container finished" podID="6eeb640e-fd19-4c09-b322-1aed1fa21fcc" containerID="8230e065f8b73c6964b412b2d8240d769b663fa5bb26dc9e9e5a9e383a3a0899" exitCode=0 Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.545367 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" event={"ID":"6eeb640e-fd19-4c09-b322-1aed1fa21fcc","Type":"ContainerDied","Data":"8230e065f8b73c6964b412b2d8240d769b663fa5bb26dc9e9e5a9e383a3a0899"} Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.545386 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" event={"ID":"6eeb640e-fd19-4c09-b322-1aed1fa21fcc","Type":"ContainerStarted","Data":"3e9e2c7d11d983f436bea95f068e648b42ec460a68d95e4ad440ba38363d6c16"} Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.564670 4796 scope.go:117] "RemoveContainer" containerID="fdc068df61968815befec7b4d3f76c3b0e75231061974b8bf20acc52b6f8dec0" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.667327 4796 scope.go:117] "RemoveContainer" containerID="970266e6f8c4beaae3b9e78d35f869c388cc5af3909701199c94feda734c59ec" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.726329 4796 scope.go:117] "RemoveContainer" containerID="afe353db3259054ca2498847aa251d74d7c2dd266676b9656b20f1da5997425a" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.728324 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dnpfk"] Dec 05 10:40:37 crc kubenswrapper[4796]: E1205 10:40:37.733867 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afe353db3259054ca2498847aa251d74d7c2dd266676b9656b20f1da5997425a\": container with ID starting with afe353db3259054ca2498847aa251d74d7c2dd266676b9656b20f1da5997425a not found: ID does not exist" containerID="afe353db3259054ca2498847aa251d74d7c2dd266676b9656b20f1da5997425a" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.733904 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afe353db3259054ca2498847aa251d74d7c2dd266676b9656b20f1da5997425a"} err="failed to get container status \"afe353db3259054ca2498847aa251d74d7c2dd266676b9656b20f1da5997425a\": rpc error: code = NotFound desc = could not find container \"afe353db3259054ca2498847aa251d74d7c2dd266676b9656b20f1da5997425a\": container with ID starting with afe353db3259054ca2498847aa251d74d7c2dd266676b9656b20f1da5997425a not found: ID does not exist" Dec 05 10:40:37 crc 
kubenswrapper[4796]: I1205 10:40:37.733930 4796 scope.go:117] "RemoveContainer" containerID="fdc068df61968815befec7b4d3f76c3b0e75231061974b8bf20acc52b6f8dec0" Dec 05 10:40:37 crc kubenswrapper[4796]: E1205 10:40:37.734975 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdc068df61968815befec7b4d3f76c3b0e75231061974b8bf20acc52b6f8dec0\": container with ID starting with fdc068df61968815befec7b4d3f76c3b0e75231061974b8bf20acc52b6f8dec0 not found: ID does not exist" containerID="fdc068df61968815befec7b4d3f76c3b0e75231061974b8bf20acc52b6f8dec0" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.735011 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdc068df61968815befec7b4d3f76c3b0e75231061974b8bf20acc52b6f8dec0"} err="failed to get container status \"fdc068df61968815befec7b4d3f76c3b0e75231061974b8bf20acc52b6f8dec0\": rpc error: code = NotFound desc = could not find container \"fdc068df61968815befec7b4d3f76c3b0e75231061974b8bf20acc52b6f8dec0\": container with ID starting with fdc068df61968815befec7b4d3f76c3b0e75231061974b8bf20acc52b6f8dec0 not found: ID does not exist" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.735036 4796 scope.go:117] "RemoveContainer" containerID="970266e6f8c4beaae3b9e78d35f869c388cc5af3909701199c94feda734c59ec" Dec 05 10:40:37 crc kubenswrapper[4796]: E1205 10:40:37.735356 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"970266e6f8c4beaae3b9e78d35f869c388cc5af3909701199c94feda734c59ec\": container with ID starting with 970266e6f8c4beaae3b9e78d35f869c388cc5af3909701199c94feda734c59ec not found: ID does not exist" containerID="970266e6f8c4beaae3b9e78d35f869c388cc5af3909701199c94feda734c59ec" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.735377 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"970266e6f8c4beaae3b9e78d35f869c388cc5af3909701199c94feda734c59ec"} err="failed to get container status \"970266e6f8c4beaae3b9e78d35f869c388cc5af3909701199c94feda734c59ec\": rpc error: code = NotFound desc = could not find container \"970266e6f8c4beaae3b9e78d35f869c388cc5af3909701199c94feda734c59ec\": container with ID starting with 970266e6f8c4beaae3b9e78d35f869c388cc5af3909701199c94feda734c59ec not found: ID does not exist" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.736089 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dnpfk"] Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.746328 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 05 10:40:37 crc kubenswrapper[4796]: E1205 10:40:37.746621 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56d752f-ca0a-4be7-b05b-381b1a5fdfc6" containerName="registry-server" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.746638 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56d752f-ca0a-4be7-b05b-381b1a5fdfc6" containerName="registry-server" Dec 05 10:40:37 crc kubenswrapper[4796]: E1205 10:40:37.746650 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56d752f-ca0a-4be7-b05b-381b1a5fdfc6" containerName="extract-content" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.746657 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56d752f-ca0a-4be7-b05b-381b1a5fdfc6" containerName="extract-content" Dec 05 10:40:37 crc kubenswrapper[4796]: E1205 10:40:37.746698 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56d752f-ca0a-4be7-b05b-381b1a5fdfc6" containerName="extract-utilities" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.746704 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56d752f-ca0a-4be7-b05b-381b1a5fdfc6" containerName="extract-utilities" Dec 05 10:40:37 crc 
kubenswrapper[4796]: I1205 10:40:37.746840 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56d752f-ca0a-4be7-b05b-381b1a5fdfc6" containerName="registry-server" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.750593 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.752980 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-7nkvg" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.755903 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.756416 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.756930 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.762121 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.786960 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-db7757ddc-flspl" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.843230 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drphl\" (UniqueName: \"kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-kube-api-access-drphl\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " pod="openstack/swift-storage-0" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.843302 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " pod="openstack/swift-storage-0" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.843341 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ff8476eb-20ea-41dc-97e0-d08619e42a30-cache\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " pod="openstack/swift-storage-0" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.843391 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-etc-swift\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " pod="openstack/swift-storage-0" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.843415 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ff8476eb-20ea-41dc-97e0-d08619e42a30-lock\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " pod="openstack/swift-storage-0" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.945024 4796 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cljq\" (UniqueName: \"kubernetes.io/projected/67f1c191-39ec-4814-b042-e7757b84a4a3-kube-api-access-5cljq\") pod \"67f1c191-39ec-4814-b042-e7757b84a4a3\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.945581 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-ovsdbserver-sb\") pod \"67f1c191-39ec-4814-b042-e7757b84a4a3\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.945765 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-dns-svc\") pod \"67f1c191-39ec-4814-b042-e7757b84a4a3\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.945976 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-config\") pod \"67f1c191-39ec-4814-b042-e7757b84a4a3\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.946167 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-ovsdbserver-nb\") pod \"67f1c191-39ec-4814-b042-e7757b84a4a3\" (UID: \"67f1c191-39ec-4814-b042-e7757b84a4a3\") " Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.946807 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ff8476eb-20ea-41dc-97e0-d08619e42a30-cache\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " 
pod="openstack/swift-storage-0" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.946962 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-etc-swift\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " pod="openstack/swift-storage-0" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.947045 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ff8476eb-20ea-41dc-97e0-d08619e42a30-lock\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " pod="openstack/swift-storage-0" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.947225 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drphl\" (UniqueName: \"kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-kube-api-access-drphl\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " pod="openstack/swift-storage-0" Dec 05 10:40:37 crc kubenswrapper[4796]: E1205 10:40:37.947588 4796 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 05 10:40:37 crc kubenswrapper[4796]: E1205 10:40:37.947623 4796 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.947597 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ff8476eb-20ea-41dc-97e0-d08619e42a30-cache\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " pod="openstack/swift-storage-0" Dec 05 10:40:37 crc kubenswrapper[4796]: E1205 10:40:37.947704 4796 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-etc-swift podName:ff8476eb-20ea-41dc-97e0-d08619e42a30 nodeName:}" failed. No retries permitted until 2025-12-05 10:40:38.447669158 +0000 UTC m=+784.735774671 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-etc-swift") pod "swift-storage-0" (UID: "ff8476eb-20ea-41dc-97e0-d08619e42a30") : configmap "swift-ring-files" not found Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.947717 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ff8476eb-20ea-41dc-97e0-d08619e42a30-lock\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " pod="openstack/swift-storage-0" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.947953 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/swift-storage-0" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.947340 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " pod="openstack/swift-storage-0" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.955000 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f1c191-39ec-4814-b042-e7757b84a4a3-kube-api-access-5cljq" (OuterVolumeSpecName: "kube-api-access-5cljq") pod "67f1c191-39ec-4814-b042-e7757b84a4a3" (UID: "67f1c191-39ec-4814-b042-e7757b84a4a3"). InnerVolumeSpecName "kube-api-access-5cljq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.970335 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drphl\" (UniqueName: \"kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-kube-api-access-drphl\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " pod="openstack/swift-storage-0" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.977178 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-42p7z"] Dec 05 10:40:37 crc kubenswrapper[4796]: E1205 10:40:37.977768 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f1c191-39ec-4814-b042-e7757b84a4a3" containerName="init" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.977792 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f1c191-39ec-4814-b042-e7757b84a4a3" containerName="init" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.978018 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f1c191-39ec-4814-b042-e7757b84a4a3" containerName="init" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.978912 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.980709 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.981031 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.981359 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 05 10:40:37 crc kubenswrapper[4796]: I1205 10:40:37.985792 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-42p7z"] Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:37.998010 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "67f1c191-39ec-4814-b042-e7757b84a4a3" (UID: "67f1c191-39ec-4814-b042-e7757b84a4a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.000442 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " pod="openstack/swift-storage-0" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.009545 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "67f1c191-39ec-4814-b042-e7757b84a4a3" (UID: "67f1c191-39ec-4814-b042-e7757b84a4a3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.014191 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "67f1c191-39ec-4814-b042-e7757b84a4a3" (UID: "67f1c191-39ec-4814-b042-e7757b84a4a3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.014828 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-config" (OuterVolumeSpecName: "config") pod "67f1c191-39ec-4814-b042-e7757b84a4a3" (UID: "67f1c191-39ec-4814-b042-e7757b84a4a3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.041046 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92a309cc-d0ff-4997-b17e-6b45583bf286" path="/var/lib/kubelet/pods/92a309cc-d0ff-4997-b17e-6b45583bf286/volumes" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.041880 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c56d752f-ca0a-4be7-b05b-381b1a5fdfc6" path="/var/lib/kubelet/pods/c56d752f-ca0a-4be7-b05b-381b1a5fdfc6/volumes" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.049742 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-etc-swift\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.049790 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" 
(UniqueName: \"kubernetes.io/secret/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-dispersionconf\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.049837 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-swiftconf\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.049858 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-ring-data-devices\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.049918 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-combined-ca-bundle\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.049947 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2m6x\" (UniqueName: \"kubernetes.io/projected/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-kube-api-access-s2m6x\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.049973 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-scripts\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.050042 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.050053 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.050064 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cljq\" (UniqueName: \"kubernetes.io/projected/67f1c191-39ec-4814-b042-e7757b84a4a3-kube-api-access-5cljq\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.050104 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.050132 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67f1c191-39ec-4814-b042-e7757b84a4a3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.152348 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-dispersionconf\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 
10:40:38.152569 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-swiftconf\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.152609 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-ring-data-devices\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.152768 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-combined-ca-bundle\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.152885 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2m6x\" (UniqueName: \"kubernetes.io/projected/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-kube-api-access-s2m6x\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.152964 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-scripts\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.153013 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-etc-swift\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.153341 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-ring-data-devices\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.153357 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-etc-swift\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.154186 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-scripts\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.155576 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-dispersionconf\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.156632 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-swiftconf\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.157466 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-combined-ca-bundle\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.167266 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2m6x\" (UniqueName: \"kubernetes.io/projected/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-kube-api-access-s2m6x\") pod \"swift-ring-rebalance-42p7z\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " pod="openstack/swift-ring-rebalance-42p7z"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.221189 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.221866 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.265435 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.304376 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-42p7z"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.459251 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-etc-swift\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " pod="openstack/swift-storage-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: E1205 10:40:38.459516 4796 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 05 10:40:38 crc kubenswrapper[4796]: E1205 10:40:38.459963 4796 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 05 10:40:38 crc kubenswrapper[4796]: E1205 10:40:38.460031 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-etc-swift podName:ff8476eb-20ea-41dc-97e0-d08619e42a30 nodeName:}" failed. No retries permitted until 2025-12-05 10:40:39.460008867 +0000 UTC m=+785.748114380 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-etc-swift") pod "swift-storage-0" (UID: "ff8476eb-20ea-41dc-97e0-d08619e42a30") : configmap "swift-ring-files" not found
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.553009 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-db7757ddc-flspl" event={"ID":"67f1c191-39ec-4814-b042-e7757b84a4a3","Type":"ContainerDied","Data":"402cb6a204cbf7cb3b9c90d4590382b1bfc004eca0e230c6f66bfaf36feba00d"}
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.553037 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-db7757ddc-flspl"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.553058 4796 scope.go:117] "RemoveContainer" containerID="05b765c0449976fc29e926c9be388b6b232018125e41884f6d675d21de496c37"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.557231 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" event={"ID":"6eeb640e-fd19-4c09-b322-1aed1fa21fcc","Type":"ContainerStarted","Data":"48077c01c2702955978f8bdaa91830daaa7b76f5152ad6b6e90a7d204e8413e7"}
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.557307 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.572151 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" podStartSLOduration=2.572139214 podStartE2EDuration="2.572139214s" podCreationTimestamp="2025-12-05 10:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:40:38.570792139 +0000 UTC m=+784.858897663" watchObservedRunningTime="2025-12-05 10:40:38.572139214 +0000 UTC m=+784.860244727"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.585285 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.637381 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-db7757ddc-flspl"]
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.642101 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-db7757ddc-flspl"]
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.685162 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-42p7z"]
Dec 05 10:40:38 crc kubenswrapper[4796]: W1205 10:40:38.691702 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5bfeb9c9_1808_4f43_b61b_4fafe36cda09.slice/crio-b6562008d81c35ee52a3e8e3b990a2355cd6df7879a65a5a20988833629f3bcf WatchSource:0}: Error finding container b6562008d81c35ee52a3e8e3b990a2355cd6df7879a65a5a20988833629f3bcf: Status 404 returned error can't find the container with id b6562008d81c35ee52a3e8e3b990a2355cd6df7879a65a5a20988833629f3bcf
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.757929 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.759053 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.762539 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.762722 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.763377 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.769297 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-hc6mh"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.773924 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.866610 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc8fr\" (UniqueName: \"kubernetes.io/projected/75432bf8-6355-495f-aa1d-94928e9b15ba-kube-api-access-cc8fr\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.866661 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/75432bf8-6355-495f-aa1d-94928e9b15ba-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.866702 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75432bf8-6355-495f-aa1d-94928e9b15ba-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.866780 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/75432bf8-6355-495f-aa1d-94928e9b15ba-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.866845 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/75432bf8-6355-495f-aa1d-94928e9b15ba-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.866964 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75432bf8-6355-495f-aa1d-94928e9b15ba-scripts\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.867004 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75432bf8-6355-495f-aa1d-94928e9b15ba-config\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.968131 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75432bf8-6355-495f-aa1d-94928e9b15ba-config\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.968195 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc8fr\" (UniqueName: \"kubernetes.io/projected/75432bf8-6355-495f-aa1d-94928e9b15ba-kube-api-access-cc8fr\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.968226 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/75432bf8-6355-495f-aa1d-94928e9b15ba-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.968249 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75432bf8-6355-495f-aa1d-94928e9b15ba-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.968288 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/75432bf8-6355-495f-aa1d-94928e9b15ba-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.968338 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/75432bf8-6355-495f-aa1d-94928e9b15ba-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.968370 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75432bf8-6355-495f-aa1d-94928e9b15ba-scripts\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.968743 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/75432bf8-6355-495f-aa1d-94928e9b15ba-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.969210 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75432bf8-6355-495f-aa1d-94928e9b15ba-scripts\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.969469 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75432bf8-6355-495f-aa1d-94928e9b15ba-config\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.975211 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/75432bf8-6355-495f-aa1d-94928e9b15ba-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.975904 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75432bf8-6355-495f-aa1d-94928e9b15ba-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.977579 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/75432bf8-6355-495f-aa1d-94928e9b15ba-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:38 crc kubenswrapper[4796]: I1205 10:40:38.983003 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc8fr\" (UniqueName: \"kubernetes.io/projected/75432bf8-6355-495f-aa1d-94928e9b15ba-kube-api-access-cc8fr\") pod \"ovn-northd-0\" (UID: \"75432bf8-6355-495f-aa1d-94928e9b15ba\") " pod="openstack/ovn-northd-0"
Dec 05 10:40:39 crc kubenswrapper[4796]: I1205 10:40:39.075425 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Dec 05 10:40:39 crc kubenswrapper[4796]: I1205 10:40:39.443272 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Dec 05 10:40:39 crc kubenswrapper[4796]: W1205 10:40:39.444861 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75432bf8_6355_495f_aa1d_94928e9b15ba.slice/crio-cab1a5be0c125bdcf811cfe113a03597a6b4b3a9fc1d79f6e4043f069c1e5ebc WatchSource:0}: Error finding container cab1a5be0c125bdcf811cfe113a03597a6b4b3a9fc1d79f6e4043f069c1e5ebc: Status 404 returned error can't find the container with id cab1a5be0c125bdcf811cfe113a03597a6b4b3a9fc1d79f6e4043f069c1e5ebc
Dec 05 10:40:39 crc kubenswrapper[4796]: I1205 10:40:39.477413 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-etc-swift\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " pod="openstack/swift-storage-0"
Dec 05 10:40:39 crc kubenswrapper[4796]: E1205 10:40:39.477622 4796 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 05 10:40:39 crc kubenswrapper[4796]: E1205 10:40:39.477661 4796 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 05 10:40:39 crc kubenswrapper[4796]: E1205 10:40:39.477758 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-etc-swift podName:ff8476eb-20ea-41dc-97e0-d08619e42a30 nodeName:}" failed. No retries permitted until 2025-12-05 10:40:41.477741547 +0000 UTC m=+787.765847060 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-etc-swift") pod "swift-storage-0" (UID: "ff8476eb-20ea-41dc-97e0-d08619e42a30") : configmap "swift-ring-files" not found
Dec 05 10:40:39 crc kubenswrapper[4796]: I1205 10:40:39.564412 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"75432bf8-6355-495f-aa1d-94928e9b15ba","Type":"ContainerStarted","Data":"cab1a5be0c125bdcf811cfe113a03597a6b4b3a9fc1d79f6e4043f069c1e5ebc"}
Dec 05 10:40:39 crc kubenswrapper[4796]: I1205 10:40:39.565388 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-42p7z" event={"ID":"5bfeb9c9-1808-4f43-b61b-4fafe36cda09","Type":"ContainerStarted","Data":"b6562008d81c35ee52a3e8e3b990a2355cd6df7879a65a5a20988833629f3bcf"}
Dec 05 10:40:40 crc kubenswrapper[4796]: I1205 10:40:40.039877 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67f1c191-39ec-4814-b042-e7757b84a4a3" path="/var/lib/kubelet/pods/67f1c191-39ec-4814-b042-e7757b84a4a3/volumes"
Dec 05 10:40:41 crc kubenswrapper[4796]: I1205 10:40:41.518240 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-etc-swift\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " pod="openstack/swift-storage-0"
Dec 05 10:40:41 crc kubenswrapper[4796]: E1205 10:40:41.518477 4796 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 05 10:40:41 crc kubenswrapper[4796]: E1205 10:40:41.518560 4796 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 05 10:40:41 crc kubenswrapper[4796]: E1205 10:40:41.518642 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-etc-swift podName:ff8476eb-20ea-41dc-97e0-d08619e42a30 nodeName:}" failed. No retries permitted until 2025-12-05 10:40:45.518617386 +0000 UTC m=+791.806722899 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-etc-swift") pod "swift-storage-0" (UID: "ff8476eb-20ea-41dc-97e0-d08619e42a30") : configmap "swift-ring-files" not found
Dec 05 10:40:42 crc kubenswrapper[4796]: I1205 10:40:42.587987 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"75432bf8-6355-495f-aa1d-94928e9b15ba","Type":"ContainerStarted","Data":"94eb6345c07937f8e7b72064893a98a3aad64b465dbea356bf1f8750a9006ddf"}
Dec 05 10:40:42 crc kubenswrapper[4796]: I1205 10:40:42.588724 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"75432bf8-6355-495f-aa1d-94928e9b15ba","Type":"ContainerStarted","Data":"a5e9374cbcd6a111c6f91a61b655653b13c2f46510809da34d0de35e28a31254"}
Dec 05 10:40:42 crc kubenswrapper[4796]: I1205 10:40:42.588788 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Dec 05 10:40:42 crc kubenswrapper[4796]: I1205 10:40:42.589972 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-42p7z" event={"ID":"5bfeb9c9-1808-4f43-b61b-4fafe36cda09","Type":"ContainerStarted","Data":"921c98048fe8fc1662210d2e9706b5b9d8dace0d02f10ba9eff8fba7e65de3f3"}
Dec 05 10:40:42 crc kubenswrapper[4796]: I1205 10:40:42.610630 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.2002053950000002 podStartE2EDuration="4.61061783s" podCreationTimestamp="2025-12-05 10:40:38 +0000 UTC" firstStartedPulling="2025-12-05 10:40:39.446393801 +0000 UTC m=+785.734499305" lastFinishedPulling="2025-12-05 10:40:41.856806227 +0000 UTC m=+788.144911740" observedRunningTime="2025-12-05 10:40:42.60240047 +0000 UTC m=+788.890505983" watchObservedRunningTime="2025-12-05 10:40:42.61061783 +0000 UTC m=+788.898723343"
Dec 05 10:40:42 crc kubenswrapper[4796]: I1205 10:40:42.625996 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-42p7z" podStartSLOduration=2.83151051 podStartE2EDuration="5.625980903s" podCreationTimestamp="2025-12-05 10:40:37 +0000 UTC" firstStartedPulling="2025-12-05 10:40:38.694171548 +0000 UTC m=+784.982277062" lastFinishedPulling="2025-12-05 10:40:41.488641952 +0000 UTC m=+787.776747455" observedRunningTime="2025-12-05 10:40:42.624912182 +0000 UTC m=+788.913017695" watchObservedRunningTime="2025-12-05 10:40:42.625980903 +0000 UTC m=+788.914086417"
Dec 05 10:40:42 crc kubenswrapper[4796]: I1205 10:40:42.867397 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Dec 05 10:40:42 crc kubenswrapper[4796]: I1205 10:40:42.867457 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Dec 05 10:40:42 crc kubenswrapper[4796]: I1205 10:40:42.905316 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Dec 05 10:40:43 crc kubenswrapper[4796]: I1205 10:40:43.634189 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.255766 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6dhkt"]
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.257169 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6dhkt"
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.267451 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.267493 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.268710 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6dhkt"]
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.307828 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.367102 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx5r7\" (UniqueName: \"kubernetes.io/projected/5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80-kube-api-access-hx5r7\") pod \"keystone-db-create-6dhkt\" (UID: \"5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80\") " pod="openstack/keystone-db-create-6dhkt"
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.468909 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx5r7\" (UniqueName: \"kubernetes.io/projected/5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80-kube-api-access-hx5r7\") pod \"keystone-db-create-6dhkt\" (UID: \"5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80\") " pod="openstack/keystone-db-create-6dhkt"
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.486558 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-22f2t"]
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.487866 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-22f2t"
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.492164 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-22f2t"]
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.493024 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx5r7\" (UniqueName: \"kubernetes.io/projected/5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80-kube-api-access-hx5r7\") pod \"keystone-db-create-6dhkt\" (UID: \"5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80\") " pod="openstack/keystone-db-create-6dhkt"
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.570417 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbfm4\" (UniqueName: \"kubernetes.io/projected/1e2da584-c796-4527-b9db-0b455d037fec-kube-api-access-vbfm4\") pod \"placement-db-create-22f2t\" (UID: \"1e2da584-c796-4527-b9db-0b455d037fec\") " pod="openstack/placement-db-create-22f2t"
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.575770 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6dhkt"
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.650902 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.681374 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbfm4\" (UniqueName: \"kubernetes.io/projected/1e2da584-c796-4527-b9db-0b455d037fec-kube-api-access-vbfm4\") pod \"placement-db-create-22f2t\" (UID: \"1e2da584-c796-4527-b9db-0b455d037fec\") " pod="openstack/placement-db-create-22f2t"
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.700894 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbfm4\" (UniqueName: \"kubernetes.io/projected/1e2da584-c796-4527-b9db-0b455d037fec-kube-api-access-vbfm4\") pod \"placement-db-create-22f2t\" (UID: \"1e2da584-c796-4527-b9db-0b455d037fec\") " pod="openstack/placement-db-create-22f2t"
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.807126 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-hjp4p"]
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.808532 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hjp4p"
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.812385 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-hjp4p"]
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.825445 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-22f2t"
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.892596 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb7sx\" (UniqueName: \"kubernetes.io/projected/a1794457-a1b4-4c7b-bc21-ba7acb558b2e-kube-api-access-bb7sx\") pod \"glance-db-create-hjp4p\" (UID: \"a1794457-a1b4-4c7b-bc21-ba7acb558b2e\") " pod="openstack/glance-db-create-hjp4p"
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.982077 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6dhkt"]
Dec 05 10:40:44 crc kubenswrapper[4796]: I1205 10:40:44.993571 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb7sx\" (UniqueName: \"kubernetes.io/projected/a1794457-a1b4-4c7b-bc21-ba7acb558b2e-kube-api-access-bb7sx\") pod \"glance-db-create-hjp4p\" (UID: \"a1794457-a1b4-4c7b-bc21-ba7acb558b2e\") " pod="openstack/glance-db-create-hjp4p"
Dec 05 10:40:45 crc kubenswrapper[4796]: I1205 10:40:45.009237 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb7sx\" (UniqueName: \"kubernetes.io/projected/a1794457-a1b4-4c7b-bc21-ba7acb558b2e-kube-api-access-bb7sx\") pod \"glance-db-create-hjp4p\" (UID: \"a1794457-a1b4-4c7b-bc21-ba7acb558b2e\") " pod="openstack/glance-db-create-hjp4p"
Dec 05 10:40:45 crc kubenswrapper[4796]: I1205 10:40:45.122303 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-hjp4p"
Dec 05 10:40:45 crc kubenswrapper[4796]: I1205 10:40:45.203526 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-22f2t"]
Dec 05 10:40:45 crc kubenswrapper[4796]: I1205 10:40:45.522165 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-hjp4p"]
Dec 05 10:40:45 crc kubenswrapper[4796]: W1205 10:40:45.525328 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1794457_a1b4_4c7b_bc21_ba7acb558b2e.slice/crio-6e686e53f9e98cf32620d695d372665862c9599bd05d61867fc57c234ea4ca6e WatchSource:0}: Error finding container 6e686e53f9e98cf32620d695d372665862c9599bd05d61867fc57c234ea4ca6e: Status 404 returned error can't find the container with id 6e686e53f9e98cf32620d695d372665862c9599bd05d61867fc57c234ea4ca6e
Dec 05 10:40:45 crc kubenswrapper[4796]: I1205 10:40:45.602418 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-etc-swift\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " pod="openstack/swift-storage-0"
Dec 05 10:40:45 crc kubenswrapper[4796]: E1205 10:40:45.602728 4796 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 05 10:40:45 crc kubenswrapper[4796]: E1205 10:40:45.602782 4796 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 05 10:40:45 crc kubenswrapper[4796]: E1205 10:40:45.602882 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-etc-swift podName:ff8476eb-20ea-41dc-97e0-d08619e42a30 nodeName:}" failed. No retries permitted until 2025-12-05 10:40:53.602853949 +0000 UTC m=+799.890959463 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-etc-swift") pod "swift-storage-0" (UID: "ff8476eb-20ea-41dc-97e0-d08619e42a30") : configmap "swift-ring-files" not found
Dec 05 10:40:45 crc kubenswrapper[4796]: I1205 10:40:45.612946 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hjp4p" event={"ID":"a1794457-a1b4-4c7b-bc21-ba7acb558b2e","Type":"ContainerStarted","Data":"6e686e53f9e98cf32620d695d372665862c9599bd05d61867fc57c234ea4ca6e"}
Dec 05 10:40:45 crc kubenswrapper[4796]: I1205 10:40:45.614517 4796 generic.go:334] "Generic (PLEG): container finished" podID="1e2da584-c796-4527-b9db-0b455d037fec" containerID="7843a26578240072886edee3fcdf9b6fcf4e996e70b550db3cd4dc1f18eb12d7" exitCode=0
Dec 05 10:40:45 crc kubenswrapper[4796]: I1205 10:40:45.614588 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-22f2t" event={"ID":"1e2da584-c796-4527-b9db-0b455d037fec","Type":"ContainerDied","Data":"7843a26578240072886edee3fcdf9b6fcf4e996e70b550db3cd4dc1f18eb12d7"}
Dec 05 10:40:45 crc kubenswrapper[4796]: I1205 10:40:45.614616 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-22f2t" event={"ID":"1e2da584-c796-4527-b9db-0b455d037fec","Type":"ContainerStarted","Data":"cb89961a25bf4b178ab3cdebde60ed6d97f12284fda1a26b15aa75a5f662d55b"}
Dec 05 10:40:45 crc kubenswrapper[4796]: I1205 10:40:45.617509 4796 generic.go:334] "Generic (PLEG): container finished" podID="5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80" containerID="27e25534cf6a261756afc8038a472398d7e1b4d32c56d4f45a91ec63e62e1d55" exitCode=0
Dec 05 10:40:45 crc kubenswrapper[4796]: I1205 10:40:45.617586 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6dhkt" event={"ID":"5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80","Type":"ContainerDied","Data":"27e25534cf6a261756afc8038a472398d7e1b4d32c56d4f45a91ec63e62e1d55"}
Dec 05 10:40:45 crc kubenswrapper[4796]: I1205 10:40:45.617647 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6dhkt" event={"ID":"5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80","Type":"ContainerStarted","Data":"9077f635cd8bfd3433b36c7a5ce685a72634cbd812d243ea9dd5835cebc4279c"}
Dec 05 10:40:46 crc kubenswrapper[4796]: I1205 10:40:46.625063 4796 generic.go:334] "Generic (PLEG): container finished" podID="a1794457-a1b4-4c7b-bc21-ba7acb558b2e" containerID="602ed37b7872fb46a6996860cd73bab4216149f10cbfc00441fb8a9f34fc1bf8" exitCode=0
Dec 05 10:40:46 crc kubenswrapper[4796]: I1205 10:40:46.625140 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hjp4p" event={"ID":"a1794457-a1b4-4c7b-bc21-ba7acb558b2e","Type":"ContainerDied","Data":"602ed37b7872fb46a6996860cd73bab4216149f10cbfc00441fb8a9f34fc1bf8"}
Dec 05 10:40:46 crc kubenswrapper[4796]: I1205 10:40:46.799706 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2"
Dec 05 10:40:46 crc kubenswrapper[4796]: I1205 10:40:46.857792 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb666b895-qsrsm"]
Dec 05 10:40:46 crc kubenswrapper[4796]: I1205 10:40:46.858006 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cb666b895-qsrsm" podUID="a7b55fe6-1d8f-4945-bba8-3c9c7bff3090" containerName="dnsmasq-dns" containerID="cri-o://95c1a213f91168989baa795d702a11c376ecd769c9dd11ff9f24833e7707ea3d" gracePeriod=10
Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.033209 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6dhkt"
Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.042192 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-22f2t"
Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.126356 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx5r7\" (UniqueName: \"kubernetes.io/projected/5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80-kube-api-access-hx5r7\") pod \"5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80\" (UID: \"5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80\") "
Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.126573 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbfm4\" (UniqueName: \"kubernetes.io/projected/1e2da584-c796-4527-b9db-0b455d037fec-kube-api-access-vbfm4\") pod \"1e2da584-c796-4527-b9db-0b455d037fec\" (UID: \"1e2da584-c796-4527-b9db-0b455d037fec\") "
Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.136001 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e2da584-c796-4527-b9db-0b455d037fec-kube-api-access-vbfm4" (OuterVolumeSpecName: "kube-api-access-vbfm4") pod "1e2da584-c796-4527-b9db-0b455d037fec" (UID: "1e2da584-c796-4527-b9db-0b455d037fec"). InnerVolumeSpecName "kube-api-access-vbfm4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.137732 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80-kube-api-access-hx5r7" (OuterVolumeSpecName: "kube-api-access-hx5r7") pod "5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80" (UID: "5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80"). InnerVolumeSpecName "kube-api-access-hx5r7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.228989 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbfm4\" (UniqueName: \"kubernetes.io/projected/1e2da584-c796-4527-b9db-0b455d037fec-kube-api-access-vbfm4\") on node \"crc\" DevicePath \"\""
Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.229017 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx5r7\" (UniqueName: \"kubernetes.io/projected/5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80-kube-api-access-hx5r7\") on node \"crc\" DevicePath \"\""
Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.242231 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb666b895-qsrsm"
Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.329472 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7b55fe6-1d8f-4945-bba8-3c9c7bff3090-dns-svc\") pod \"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090\" (UID: \"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090\") "
Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.329544 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsc7q\" (UniqueName: \"kubernetes.io/projected/a7b55fe6-1d8f-4945-bba8-3c9c7bff3090-kube-api-access-vsc7q\") pod \"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090\" (UID: \"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090\") "
Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.329597 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7b55fe6-1d8f-4945-bba8-3c9c7bff3090-config\") pod \"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090\" (UID: \"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090\") "
Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.333713 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7b55fe6-1d8f-4945-bba8-3c9c7bff3090-kube-api-access-vsc7q" (OuterVolumeSpecName: "kube-api-access-vsc7q") pod "a7b55fe6-1d8f-4945-bba8-3c9c7bff3090" (UID: "a7b55fe6-1d8f-4945-bba8-3c9c7bff3090"). InnerVolumeSpecName "kube-api-access-vsc7q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.358458 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7b55fe6-1d8f-4945-bba8-3c9c7bff3090-config" (OuterVolumeSpecName: "config") pod "a7b55fe6-1d8f-4945-bba8-3c9c7bff3090" (UID: "a7b55fe6-1d8f-4945-bba8-3c9c7bff3090"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.361512 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7b55fe6-1d8f-4945-bba8-3c9c7bff3090-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7b55fe6-1d8f-4945-bba8-3c9c7bff3090" (UID: "a7b55fe6-1d8f-4945-bba8-3c9c7bff3090"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.431080 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsc7q\" (UniqueName: \"kubernetes.io/projected/a7b55fe6-1d8f-4945-bba8-3c9c7bff3090-kube-api-access-vsc7q\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.431110 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7b55fe6-1d8f-4945-bba8-3c9c7bff3090-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.431119 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7b55fe6-1d8f-4945-bba8-3c9c7bff3090-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.632907 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-22f2t" event={"ID":"1e2da584-c796-4527-b9db-0b455d037fec","Type":"ContainerDied","Data":"cb89961a25bf4b178ab3cdebde60ed6d97f12284fda1a26b15aa75a5f662d55b"} Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.632927 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-22f2t" Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.632946 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb89961a25bf4b178ab3cdebde60ed6d97f12284fda1a26b15aa75a5f662d55b" Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.634808 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6dhkt" event={"ID":"5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80","Type":"ContainerDied","Data":"9077f635cd8bfd3433b36c7a5ce685a72634cbd812d243ea9dd5835cebc4279c"} Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.634833 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9077f635cd8bfd3433b36c7a5ce685a72634cbd812d243ea9dd5835cebc4279c" Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.634848 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6dhkt" Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.635824 4796 generic.go:334] "Generic (PLEG): container finished" podID="5bfeb9c9-1808-4f43-b61b-4fafe36cda09" containerID="921c98048fe8fc1662210d2e9706b5b9d8dace0d02f10ba9eff8fba7e65de3f3" exitCode=0 Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.635872 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-42p7z" event={"ID":"5bfeb9c9-1808-4f43-b61b-4fafe36cda09","Type":"ContainerDied","Data":"921c98048fe8fc1662210d2e9706b5b9d8dace0d02f10ba9eff8fba7e65de3f3"} Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.637469 4796 generic.go:334] "Generic (PLEG): container finished" podID="a7b55fe6-1d8f-4945-bba8-3c9c7bff3090" containerID="95c1a213f91168989baa795d702a11c376ecd769c9dd11ff9f24833e7707ea3d" exitCode=0 Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.637494 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb666b895-qsrsm" 
event={"ID":"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090","Type":"ContainerDied","Data":"95c1a213f91168989baa795d702a11c376ecd769c9dd11ff9f24833e7707ea3d"} Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.637524 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb666b895-qsrsm" event={"ID":"a7b55fe6-1d8f-4945-bba8-3c9c7bff3090","Type":"ContainerDied","Data":"325c93202dfd9162538259730a05b25f95072b441585a978678261e4cc13b4c4"} Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.637544 4796 scope.go:117] "RemoveContainer" containerID="95c1a213f91168989baa795d702a11c376ecd769c9dd11ff9f24833e7707ea3d" Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.637715 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb666b895-qsrsm" Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.659304 4796 scope.go:117] "RemoveContainer" containerID="bec329e9587bb0556071c77f514f01d12c55b1cb5a0dc9fa87ae758058139b4c" Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.683995 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb666b895-qsrsm"] Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.687391 4796 scope.go:117] "RemoveContainer" containerID="95c1a213f91168989baa795d702a11c376ecd769c9dd11ff9f24833e7707ea3d" Dec 05 10:40:47 crc kubenswrapper[4796]: E1205 10:40:47.687664 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95c1a213f91168989baa795d702a11c376ecd769c9dd11ff9f24833e7707ea3d\": container with ID starting with 95c1a213f91168989baa795d702a11c376ecd769c9dd11ff9f24833e7707ea3d not found: ID does not exist" containerID="95c1a213f91168989baa795d702a11c376ecd769c9dd11ff9f24833e7707ea3d" Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.687710 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"95c1a213f91168989baa795d702a11c376ecd769c9dd11ff9f24833e7707ea3d"} err="failed to get container status \"95c1a213f91168989baa795d702a11c376ecd769c9dd11ff9f24833e7707ea3d\": rpc error: code = NotFound desc = could not find container \"95c1a213f91168989baa795d702a11c376ecd769c9dd11ff9f24833e7707ea3d\": container with ID starting with 95c1a213f91168989baa795d702a11c376ecd769c9dd11ff9f24833e7707ea3d not found: ID does not exist" Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.687732 4796 scope.go:117] "RemoveContainer" containerID="bec329e9587bb0556071c77f514f01d12c55b1cb5a0dc9fa87ae758058139b4c" Dec 05 10:40:47 crc kubenswrapper[4796]: E1205 10:40:47.687942 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bec329e9587bb0556071c77f514f01d12c55b1cb5a0dc9fa87ae758058139b4c\": container with ID starting with bec329e9587bb0556071c77f514f01d12c55b1cb5a0dc9fa87ae758058139b4c not found: ID does not exist" containerID="bec329e9587bb0556071c77f514f01d12c55b1cb5a0dc9fa87ae758058139b4c" Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.687966 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bec329e9587bb0556071c77f514f01d12c55b1cb5a0dc9fa87ae758058139b4c"} err="failed to get container status \"bec329e9587bb0556071c77f514f01d12c55b1cb5a0dc9fa87ae758058139b4c\": rpc error: code = NotFound desc = could not find container \"bec329e9587bb0556071c77f514f01d12c55b1cb5a0dc9fa87ae758058139b4c\": container with ID starting with bec329e9587bb0556071c77f514f01d12c55b1cb5a0dc9fa87ae758058139b4c not found: ID does not exist" Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.688222 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cb666b895-qsrsm"] Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.819564 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-hjp4p" Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.937615 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb7sx\" (UniqueName: \"kubernetes.io/projected/a1794457-a1b4-4c7b-bc21-ba7acb558b2e-kube-api-access-bb7sx\") pod \"a1794457-a1b4-4c7b-bc21-ba7acb558b2e\" (UID: \"a1794457-a1b4-4c7b-bc21-ba7acb558b2e\") " Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.945227 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1794457-a1b4-4c7b-bc21-ba7acb558b2e-kube-api-access-bb7sx" (OuterVolumeSpecName: "kube-api-access-bb7sx") pod "a1794457-a1b4-4c7b-bc21-ba7acb558b2e" (UID: "a1794457-a1b4-4c7b-bc21-ba7acb558b2e"). InnerVolumeSpecName "kube-api-access-bb7sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:40:47 crc kubenswrapper[4796]: I1205 10:40:47.950172 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb7sx\" (UniqueName: \"kubernetes.io/projected/a1794457-a1b4-4c7b-bc21-ba7acb558b2e-kube-api-access-bb7sx\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.038605 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7b55fe6-1d8f-4945-bba8-3c9c7bff3090" path="/var/lib/kubelet/pods/a7b55fe6-1d8f-4945-bba8-3c9c7bff3090/volumes" Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.644422 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-hjp4p" Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.644437 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-hjp4p" event={"ID":"a1794457-a1b4-4c7b-bc21-ba7acb558b2e","Type":"ContainerDied","Data":"6e686e53f9e98cf32620d695d372665862c9599bd05d61867fc57c234ea4ca6e"} Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.644472 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e686e53f9e98cf32620d695d372665862c9599bd05d61867fc57c234ea4ca6e" Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.890158 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.964333 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-dispersionconf\") pod \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.964374 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2m6x\" (UniqueName: \"kubernetes.io/projected/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-kube-api-access-s2m6x\") pod \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.964412 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-ring-data-devices\") pod \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.964492 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-combined-ca-bundle\") pod \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.965278 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-swiftconf\") pod \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.965359 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-etc-swift\") pod \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.965451 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5bfeb9c9-1808-4f43-b61b-4fafe36cda09" (UID: "5bfeb9c9-1808-4f43-b61b-4fafe36cda09"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.965475 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-scripts\") pod \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\" (UID: \"5bfeb9c9-1808-4f43-b61b-4fafe36cda09\") " Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.966120 4796 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.966178 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5bfeb9c9-1808-4f43-b61b-4fafe36cda09" (UID: "5bfeb9c9-1808-4f43-b61b-4fafe36cda09"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.967634 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-kube-api-access-s2m6x" (OuterVolumeSpecName: "kube-api-access-s2m6x") pod "5bfeb9c9-1808-4f43-b61b-4fafe36cda09" (UID: "5bfeb9c9-1808-4f43-b61b-4fafe36cda09"). InnerVolumeSpecName "kube-api-access-s2m6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.983819 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5bfeb9c9-1808-4f43-b61b-4fafe36cda09" (UID: "5bfeb9c9-1808-4f43-b61b-4fafe36cda09"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.984382 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5bfeb9c9-1808-4f43-b61b-4fafe36cda09" (UID: "5bfeb9c9-1808-4f43-b61b-4fafe36cda09"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.984603 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5bfeb9c9-1808-4f43-b61b-4fafe36cda09" (UID: "5bfeb9c9-1808-4f43-b61b-4fafe36cda09"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:40:48 crc kubenswrapper[4796]: I1205 10:40:48.986873 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-scripts" (OuterVolumeSpecName: "scripts") pod "5bfeb9c9-1808-4f43-b61b-4fafe36cda09" (UID: "5bfeb9c9-1808-4f43-b61b-4fafe36cda09"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:40:49 crc kubenswrapper[4796]: I1205 10:40:49.067431 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2m6x\" (UniqueName: \"kubernetes.io/projected/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-kube-api-access-s2m6x\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:49 crc kubenswrapper[4796]: I1205 10:40:49.067458 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:49 crc kubenswrapper[4796]: I1205 10:40:49.067468 4796 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:49 crc kubenswrapper[4796]: I1205 10:40:49.067477 4796 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:49 crc kubenswrapper[4796]: I1205 10:40:49.067487 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:49 crc kubenswrapper[4796]: I1205 10:40:49.067495 4796 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5bfeb9c9-1808-4f43-b61b-4fafe36cda09-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:49 crc kubenswrapper[4796]: I1205 10:40:49.652821 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-42p7z" event={"ID":"5bfeb9c9-1808-4f43-b61b-4fafe36cda09","Type":"ContainerDied","Data":"b6562008d81c35ee52a3e8e3b990a2355cd6df7879a65a5a20988833629f3bcf"} Dec 05 10:40:49 crc kubenswrapper[4796]: I1205 
10:40:49.652858 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6562008d81c35ee52a3e8e3b990a2355cd6df7879a65a5a20988833629f3bcf" Dec 05 10:40:49 crc kubenswrapper[4796]: I1205 10:40:49.652861 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-42p7z" Dec 05 10:40:53 crc kubenswrapper[4796]: I1205 10:40:53.624728 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-etc-swift\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " pod="openstack/swift-storage-0" Dec 05 10:40:53 crc kubenswrapper[4796]: I1205 10:40:53.630852 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ff8476eb-20ea-41dc-97e0-d08619e42a30-etc-swift\") pod \"swift-storage-0\" (UID: \"ff8476eb-20ea-41dc-97e0-d08619e42a30\") " pod="openstack/swift-storage-0" Dec 05 10:40:53 crc kubenswrapper[4796]: I1205 10:40:53.673638 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 05 10:40:54 crc kubenswrapper[4796]: W1205 10:40:54.081332 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff8476eb_20ea_41dc_97e0_d08619e42a30.slice/crio-2a1b1f93d2d09c0c72a5bbb9c6b221430285ec6a2dd2837c09a285deba8cc7d2 WatchSource:0}: Error finding container 2a1b1f93d2d09c0c72a5bbb9c6b221430285ec6a2dd2837c09a285deba8cc7d2: Status 404 returned error can't find the container with id 2a1b1f93d2d09c0c72a5bbb9c6b221430285ec6a2dd2837c09a285deba8cc7d2 Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.082308 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.114480 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.245419 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6546-account-create-rs4m9"] Dec 05 10:40:54 crc kubenswrapper[4796]: E1205 10:40:54.245906 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e2da584-c796-4527-b9db-0b455d037fec" containerName="mariadb-database-create" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.245921 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e2da584-c796-4527-b9db-0b455d037fec" containerName="mariadb-database-create" Dec 05 10:40:54 crc kubenswrapper[4796]: E1205 10:40:54.245933 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80" containerName="mariadb-database-create" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.245938 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80" containerName="mariadb-database-create" Dec 05 10:40:54 crc kubenswrapper[4796]: E1205 10:40:54.245952 4796 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="a7b55fe6-1d8f-4945-bba8-3c9c7bff3090" containerName="init" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.245957 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7b55fe6-1d8f-4945-bba8-3c9c7bff3090" containerName="init" Dec 05 10:40:54 crc kubenswrapper[4796]: E1205 10:40:54.245964 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1794457-a1b4-4c7b-bc21-ba7acb558b2e" containerName="mariadb-database-create" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.245969 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1794457-a1b4-4c7b-bc21-ba7acb558b2e" containerName="mariadb-database-create" Dec 05 10:40:54 crc kubenswrapper[4796]: E1205 10:40:54.245983 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bfeb9c9-1808-4f43-b61b-4fafe36cda09" containerName="swift-ring-rebalance" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.245988 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bfeb9c9-1808-4f43-b61b-4fafe36cda09" containerName="swift-ring-rebalance" Dec 05 10:40:54 crc kubenswrapper[4796]: E1205 10:40:54.246000 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7b55fe6-1d8f-4945-bba8-3c9c7bff3090" containerName="dnsmasq-dns" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.246005 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7b55fe6-1d8f-4945-bba8-3c9c7bff3090" containerName="dnsmasq-dns" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.246125 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7b55fe6-1d8f-4945-bba8-3c9c7bff3090" containerName="dnsmasq-dns" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.246147 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1794457-a1b4-4c7b-bc21-ba7acb558b2e" containerName="mariadb-database-create" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.246167 4796 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1e2da584-c796-4527-b9db-0b455d037fec" containerName="mariadb-database-create" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.246175 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80" containerName="mariadb-database-create" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.246182 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bfeb9c9-1808-4f43-b61b-4fafe36cda09" containerName="swift-ring-rebalance" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.246623 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6546-account-create-rs4m9" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.248483 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.253065 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6546-account-create-rs4m9"] Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.338871 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mvht\" (UniqueName: \"kubernetes.io/projected/7d6a8d0e-22fa-489d-8776-0bd33787e161-kube-api-access-4mvht\") pod \"keystone-6546-account-create-rs4m9\" (UID: \"7d6a8d0e-22fa-489d-8776-0bd33787e161\") " pod="openstack/keystone-6546-account-create-rs4m9" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.440833 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mvht\" (UniqueName: \"kubernetes.io/projected/7d6a8d0e-22fa-489d-8776-0bd33787e161-kube-api-access-4mvht\") pod \"keystone-6546-account-create-rs4m9\" (UID: \"7d6a8d0e-22fa-489d-8776-0bd33787e161\") " pod="openstack/keystone-6546-account-create-rs4m9" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.454910 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mvht\" (UniqueName: \"kubernetes.io/projected/7d6a8d0e-22fa-489d-8776-0bd33787e161-kube-api-access-4mvht\") pod \"keystone-6546-account-create-rs4m9\" (UID: \"7d6a8d0e-22fa-489d-8776-0bd33787e161\") " pod="openstack/keystone-6546-account-create-rs4m9" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.500742 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8296-account-create-vsqtn"] Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.501826 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8296-account-create-vsqtn" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.503271 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.505358 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8296-account-create-vsqtn"] Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.563957 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6546-account-create-rs4m9" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.644071 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhdfv\" (UniqueName: \"kubernetes.io/projected/855342fc-53c0-408c-88e0-bfcf5f5c181c-kube-api-access-zhdfv\") pod \"placement-8296-account-create-vsqtn\" (UID: \"855342fc-53c0-408c-88e0-bfcf5f5c181c\") " pod="openstack/placement-8296-account-create-vsqtn" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.679050 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff8476eb-20ea-41dc-97e0-d08619e42a30","Type":"ContainerStarted","Data":"2a1b1f93d2d09c0c72a5bbb9c6b221430285ec6a2dd2837c09a285deba8cc7d2"} Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.745577 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhdfv\" (UniqueName: \"kubernetes.io/projected/855342fc-53c0-408c-88e0-bfcf5f5c181c-kube-api-access-zhdfv\") pod \"placement-8296-account-create-vsqtn\" (UID: \"855342fc-53c0-408c-88e0-bfcf5f5c181c\") " pod="openstack/placement-8296-account-create-vsqtn" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.760213 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhdfv\" (UniqueName: \"kubernetes.io/projected/855342fc-53c0-408c-88e0-bfcf5f5c181c-kube-api-access-zhdfv\") pod \"placement-8296-account-create-vsqtn\" (UID: \"855342fc-53c0-408c-88e0-bfcf5f5c181c\") " pod="openstack/placement-8296-account-create-vsqtn" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.817549 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8296-account-create-vsqtn" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.854842 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ebc7-account-create-t4zfm"] Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.857193 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ebc7-account-create-t4zfm" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.858534 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.860235 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ebc7-account-create-t4zfm"] Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.925791 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6546-account-create-rs4m9"] Dec 05 10:40:54 crc kubenswrapper[4796]: I1205 10:40:54.948978 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfmqc\" (UniqueName: \"kubernetes.io/projected/19c553eb-9b61-4fd2-8584-a8f9d862f59d-kube-api-access-sfmqc\") pod \"glance-ebc7-account-create-t4zfm\" (UID: \"19c553eb-9b61-4fd2-8584-a8f9d862f59d\") " pod="openstack/glance-ebc7-account-create-t4zfm" Dec 05 10:40:55 crc kubenswrapper[4796]: I1205 10:40:55.050154 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfmqc\" (UniqueName: \"kubernetes.io/projected/19c553eb-9b61-4fd2-8584-a8f9d862f59d-kube-api-access-sfmqc\") pod \"glance-ebc7-account-create-t4zfm\" (UID: \"19c553eb-9b61-4fd2-8584-a8f9d862f59d\") " pod="openstack/glance-ebc7-account-create-t4zfm" Dec 05 10:40:55 crc kubenswrapper[4796]: I1205 10:40:55.062879 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfmqc\" (UniqueName: 
\"kubernetes.io/projected/19c553eb-9b61-4fd2-8584-a8f9d862f59d-kube-api-access-sfmqc\") pod \"glance-ebc7-account-create-t4zfm\" (UID: \"19c553eb-9b61-4fd2-8584-a8f9d862f59d\") " pod="openstack/glance-ebc7-account-create-t4zfm" Dec 05 10:40:55 crc kubenswrapper[4796]: I1205 10:40:55.189130 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8296-account-create-vsqtn"] Dec 05 10:40:55 crc kubenswrapper[4796]: I1205 10:40:55.203895 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ebc7-account-create-t4zfm" Dec 05 10:40:55 crc kubenswrapper[4796]: W1205 10:40:55.231315 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod855342fc_53c0_408c_88e0_bfcf5f5c181c.slice/crio-21d86a0ba4ce361272e7ace866ea18883ccba6427edea911234fe6826bc3c932 WatchSource:0}: Error finding container 21d86a0ba4ce361272e7ace866ea18883ccba6427edea911234fe6826bc3c932: Status 404 returned error can't find the container with id 21d86a0ba4ce361272e7ace866ea18883ccba6427edea911234fe6826bc3c932 Dec 05 10:40:55 crc kubenswrapper[4796]: I1205 10:40:55.590468 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ebc7-account-create-t4zfm"] Dec 05 10:40:55 crc kubenswrapper[4796]: W1205 10:40:55.594290 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19c553eb_9b61_4fd2_8584_a8f9d862f59d.slice/crio-f9e639087cdd9701881656adcd69ddeeb6b26fda89d3726e3108854d6a0c47f8 WatchSource:0}: Error finding container f9e639087cdd9701881656adcd69ddeeb6b26fda89d3726e3108854d6a0c47f8: Status 404 returned error can't find the container with id f9e639087cdd9701881656adcd69ddeeb6b26fda89d3726e3108854d6a0c47f8 Dec 05 10:40:55 crc kubenswrapper[4796]: I1205 10:40:55.687444 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="855342fc-53c0-408c-88e0-bfcf5f5c181c" containerID="d7bc89b859b2c18228ac1400eb6f5024d0eea0390bb2b102bfbef3bdd2418683" exitCode=0 Dec 05 10:40:55 crc kubenswrapper[4796]: I1205 10:40:55.687535 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8296-account-create-vsqtn" event={"ID":"855342fc-53c0-408c-88e0-bfcf5f5c181c","Type":"ContainerDied","Data":"d7bc89b859b2c18228ac1400eb6f5024d0eea0390bb2b102bfbef3bdd2418683"} Dec 05 10:40:55 crc kubenswrapper[4796]: I1205 10:40:55.687739 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8296-account-create-vsqtn" event={"ID":"855342fc-53c0-408c-88e0-bfcf5f5c181c","Type":"ContainerStarted","Data":"21d86a0ba4ce361272e7ace866ea18883ccba6427edea911234fe6826bc3c932"} Dec 05 10:40:55 crc kubenswrapper[4796]: I1205 10:40:55.690857 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff8476eb-20ea-41dc-97e0-d08619e42a30","Type":"ContainerStarted","Data":"d78429df02de7b9bd9f20145d744b28fb9f1b8f94a9dc7ac8e61ce1b5f1863e9"} Dec 05 10:40:55 crc kubenswrapper[4796]: I1205 10:40:55.690918 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff8476eb-20ea-41dc-97e0-d08619e42a30","Type":"ContainerStarted","Data":"f850003546ad1698c17576251536f87bc607aa213c799bf313945e531645133b"} Dec 05 10:40:55 crc kubenswrapper[4796]: I1205 10:40:55.690931 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff8476eb-20ea-41dc-97e0-d08619e42a30","Type":"ContainerStarted","Data":"04fb817e28b4863fed5edab113317bedbc75569a5146075ebde205e36784c26f"} Dec 05 10:40:55 crc kubenswrapper[4796]: I1205 10:40:55.695244 4796 generic.go:334] "Generic (PLEG): container finished" podID="7d6a8d0e-22fa-489d-8776-0bd33787e161" containerID="007acfe157f50bf1f983628a254e4792c914b5d9977f8e5a1583309e7ef53f05" exitCode=0 Dec 05 10:40:55 crc kubenswrapper[4796]: I1205 
10:40:55.695302 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6546-account-create-rs4m9" event={"ID":"7d6a8d0e-22fa-489d-8776-0bd33787e161","Type":"ContainerDied","Data":"007acfe157f50bf1f983628a254e4792c914b5d9977f8e5a1583309e7ef53f05"} Dec 05 10:40:55 crc kubenswrapper[4796]: I1205 10:40:55.695322 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6546-account-create-rs4m9" event={"ID":"7d6a8d0e-22fa-489d-8776-0bd33787e161","Type":"ContainerStarted","Data":"b79626aca251d7dd17328e3dfad09c73980546c69ccdfe2901e3a50dbf06ff81"} Dec 05 10:40:55 crc kubenswrapper[4796]: I1205 10:40:55.697308 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ebc7-account-create-t4zfm" event={"ID":"19c553eb-9b61-4fd2-8584-a8f9d862f59d","Type":"ContainerStarted","Data":"f9e639087cdd9701881656adcd69ddeeb6b26fda89d3726e3108854d6a0c47f8"} Dec 05 10:40:56 crc kubenswrapper[4796]: I1205 10:40:56.704311 4796 generic.go:334] "Generic (PLEG): container finished" podID="19c553eb-9b61-4fd2-8584-a8f9d862f59d" containerID="5a7496a5607e1453f37a1884bab360790721361124a9ae8930fa77a0d6fa6e2f" exitCode=0 Dec 05 10:40:56 crc kubenswrapper[4796]: I1205 10:40:56.704350 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ebc7-account-create-t4zfm" event={"ID":"19c553eb-9b61-4fd2-8584-a8f9d862f59d","Type":"ContainerDied","Data":"5a7496a5607e1453f37a1884bab360790721361124a9ae8930fa77a0d6fa6e2f"} Dec 05 10:40:56 crc kubenswrapper[4796]: I1205 10:40:56.706528 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff8476eb-20ea-41dc-97e0-d08619e42a30","Type":"ContainerStarted","Data":"acf2a344e1ecd3eb6435440827f344a85b028f7578e3f0e16f587911cef4d21c"} Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.063989 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6546-account-create-rs4m9" Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.068370 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8296-account-create-vsqtn" Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.178590 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mvht\" (UniqueName: \"kubernetes.io/projected/7d6a8d0e-22fa-489d-8776-0bd33787e161-kube-api-access-4mvht\") pod \"7d6a8d0e-22fa-489d-8776-0bd33787e161\" (UID: \"7d6a8d0e-22fa-489d-8776-0bd33787e161\") " Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.178648 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhdfv\" (UniqueName: \"kubernetes.io/projected/855342fc-53c0-408c-88e0-bfcf5f5c181c-kube-api-access-zhdfv\") pod \"855342fc-53c0-408c-88e0-bfcf5f5c181c\" (UID: \"855342fc-53c0-408c-88e0-bfcf5f5c181c\") " Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.183465 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d6a8d0e-22fa-489d-8776-0bd33787e161-kube-api-access-4mvht" (OuterVolumeSpecName: "kube-api-access-4mvht") pod "7d6a8d0e-22fa-489d-8776-0bd33787e161" (UID: "7d6a8d0e-22fa-489d-8776-0bd33787e161"). InnerVolumeSpecName "kube-api-access-4mvht". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.183535 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/855342fc-53c0-408c-88e0-bfcf5f5c181c-kube-api-access-zhdfv" (OuterVolumeSpecName: "kube-api-access-zhdfv") pod "855342fc-53c0-408c-88e0-bfcf5f5c181c" (UID: "855342fc-53c0-408c-88e0-bfcf5f5c181c"). InnerVolumeSpecName "kube-api-access-zhdfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.281295 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhdfv\" (UniqueName: \"kubernetes.io/projected/855342fc-53c0-408c-88e0-bfcf5f5c181c-kube-api-access-zhdfv\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.281511 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mvht\" (UniqueName: \"kubernetes.io/projected/7d6a8d0e-22fa-489d-8776-0bd33787e161-kube-api-access-4mvht\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.713126 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6546-account-create-rs4m9" event={"ID":"7d6a8d0e-22fa-489d-8776-0bd33787e161","Type":"ContainerDied","Data":"b79626aca251d7dd17328e3dfad09c73980546c69ccdfe2901e3a50dbf06ff81"} Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.713160 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6546-account-create-rs4m9" Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.713172 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b79626aca251d7dd17328e3dfad09c73980546c69ccdfe2901e3a50dbf06ff81" Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.714576 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8296-account-create-vsqtn" Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.714589 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8296-account-create-vsqtn" event={"ID":"855342fc-53c0-408c-88e0-bfcf5f5c181c","Type":"ContainerDied","Data":"21d86a0ba4ce361272e7ace866ea18883ccba6427edea911234fe6826bc3c932"} Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.714619 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21d86a0ba4ce361272e7ace866ea18883ccba6427edea911234fe6826bc3c932" Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.717471 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff8476eb-20ea-41dc-97e0-d08619e42a30","Type":"ContainerStarted","Data":"4ae46f658bee71e0838336e18abf486891b21d2e7fe981ae3a559833180cc75a"} Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.717498 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff8476eb-20ea-41dc-97e0-d08619e42a30","Type":"ContainerStarted","Data":"b0d50445991eb6872ad8a361a52f5d1df1e7dd904e0a2b6b44dfbd51c751ed44"} Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.717508 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff8476eb-20ea-41dc-97e0-d08619e42a30","Type":"ContainerStarted","Data":"dea278655f98757fb9b883e5883eb75901f36963aa414fe133eb667c1942ea07"} Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.717517 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff8476eb-20ea-41dc-97e0-d08619e42a30","Type":"ContainerStarted","Data":"2fdead760d2dcf6a0671f676bed3919a4555ba1766b194071d5267afef57cec6"} Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.905701 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ebc7-account-create-t4zfm" Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.991240 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfmqc\" (UniqueName: \"kubernetes.io/projected/19c553eb-9b61-4fd2-8584-a8f9d862f59d-kube-api-access-sfmqc\") pod \"19c553eb-9b61-4fd2-8584-a8f9d862f59d\" (UID: \"19c553eb-9b61-4fd2-8584-a8f9d862f59d\") " Dec 05 10:40:57 crc kubenswrapper[4796]: I1205 10:40:57.994611 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19c553eb-9b61-4fd2-8584-a8f9d862f59d-kube-api-access-sfmqc" (OuterVolumeSpecName: "kube-api-access-sfmqc") pod "19c553eb-9b61-4fd2-8584-a8f9d862f59d" (UID: "19c553eb-9b61-4fd2-8584-a8f9d862f59d"). InnerVolumeSpecName "kube-api-access-sfmqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:40:58 crc kubenswrapper[4796]: I1205 10:40:58.093103 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfmqc\" (UniqueName: \"kubernetes.io/projected/19c553eb-9b61-4fd2-8584-a8f9d862f59d-kube-api-access-sfmqc\") on node \"crc\" DevicePath \"\"" Dec 05 10:40:58 crc kubenswrapper[4796]: I1205 10:40:58.726764 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ebc7-account-create-t4zfm" event={"ID":"19c553eb-9b61-4fd2-8584-a8f9d862f59d","Type":"ContainerDied","Data":"f9e639087cdd9701881656adcd69ddeeb6b26fda89d3726e3108854d6a0c47f8"} Dec 05 10:40:58 crc kubenswrapper[4796]: I1205 10:40:58.726800 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9e639087cdd9701881656adcd69ddeeb6b26fda89d3726e3108854d6a0c47f8" Dec 05 10:40:58 crc kubenswrapper[4796]: I1205 10:40:58.726771 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-ebc7-account-create-t4zfm" Dec 05 10:40:58 crc kubenswrapper[4796]: I1205 10:40:58.730344 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff8476eb-20ea-41dc-97e0-d08619e42a30","Type":"ContainerStarted","Data":"dcb8cb548c55dad50a3ffb2fe07038052214892443df24d8cb306e08420ba3fa"} Dec 05 10:40:58 crc kubenswrapper[4796]: I1205 10:40:58.730368 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff8476eb-20ea-41dc-97e0-d08619e42a30","Type":"ContainerStarted","Data":"4ddbb1ba6ed9df61f8f3e0a04fdf8a413827cb0fe4d48b3a5a03d1b373f833a9"} Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.739453 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff8476eb-20ea-41dc-97e0-d08619e42a30","Type":"ContainerStarted","Data":"cf9c3665084493243422dea6f74580d9914a383d493c1d690eee9cc50547c53a"} Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.739714 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff8476eb-20ea-41dc-97e0-d08619e42a30","Type":"ContainerStarted","Data":"45d5a3b362c368db62aebc1a31ff15d3f3d8355df7d223eda86995f9adbde84b"} Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.739725 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff8476eb-20ea-41dc-97e0-d08619e42a30","Type":"ContainerStarted","Data":"4263091aa3b4e6343f18ceb5cea63e91bd21e8703d5a821e3e0600252e4c778e"} Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.739733 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ff8476eb-20ea-41dc-97e0-d08619e42a30","Type":"ContainerStarted","Data":"a123ecb45d545375ee87af7f69a1a73199f9619e2fdf7d66c59e4976e7fced19"} Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.739743 4796 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/swift-storage-0" event={"ID":"ff8476eb-20ea-41dc-97e0-d08619e42a30","Type":"ContainerStarted","Data":"426d47b2dda6710f9517241b401d08198af914ef17ff5d1732aa094d01d61ae6"} Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.765962 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=19.44681379 podStartE2EDuration="23.765948689s" podCreationTimestamp="2025-12-05 10:40:36 +0000 UTC" firstStartedPulling="2025-12-05 10:40:54.083045326 +0000 UTC m=+800.371150839" lastFinishedPulling="2025-12-05 10:40:58.402180224 +0000 UTC m=+804.690285738" observedRunningTime="2025-12-05 10:40:59.761323568 +0000 UTC m=+806.049429091" watchObservedRunningTime="2025-12-05 10:40:59.765948689 +0000 UTC m=+806.054054202" Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.910667 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-8n89d"] Dec 05 10:40:59 crc kubenswrapper[4796]: E1205 10:40:59.910960 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c553eb-9b61-4fd2-8584-a8f9d862f59d" containerName="mariadb-account-create" Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.910978 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c553eb-9b61-4fd2-8584-a8f9d862f59d" containerName="mariadb-account-create" Dec 05 10:40:59 crc kubenswrapper[4796]: E1205 10:40:59.911006 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6a8d0e-22fa-489d-8776-0bd33787e161" containerName="mariadb-account-create" Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.911011 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6a8d0e-22fa-489d-8776-0bd33787e161" containerName="mariadb-account-create" Dec 05 10:40:59 crc kubenswrapper[4796]: E1205 10:40:59.911020 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855342fc-53c0-408c-88e0-bfcf5f5c181c" containerName="mariadb-account-create" Dec 05 
10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.911026 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="855342fc-53c0-408c-88e0-bfcf5f5c181c" containerName="mariadb-account-create" Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.911191 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="855342fc-53c0-408c-88e0-bfcf5f5c181c" containerName="mariadb-account-create" Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.911215 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6a8d0e-22fa-489d-8776-0bd33787e161" containerName="mariadb-account-create" Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.911232 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="19c553eb-9b61-4fd2-8584-a8f9d862f59d" containerName="mariadb-account-create" Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.911678 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8n89d" Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.913308 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-rlc4g" Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.913330 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.919298 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8n89d"] Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.969925 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864b648dc7-zjf7j"] Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.971223 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.972584 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 05 10:40:59 crc kubenswrapper[4796]: I1205 10:40:59.981162 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864b648dc7-zjf7j"] Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.020971 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59da3269-88ea-4095-b841-cf1b27cb4274-config-data\") pod \"glance-db-sync-8n89d\" (UID: \"59da3269-88ea-4095-b841-cf1b27cb4274\") " pod="openstack/glance-db-sync-8n89d" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.021013 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59da3269-88ea-4095-b841-cf1b27cb4274-combined-ca-bundle\") pod \"glance-db-sync-8n89d\" (UID: \"59da3269-88ea-4095-b841-cf1b27cb4274\") " pod="openstack/glance-db-sync-8n89d" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.021086 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4985p\" (UniqueName: \"kubernetes.io/projected/59da3269-88ea-4095-b841-cf1b27cb4274-kube-api-access-4985p\") pod \"glance-db-sync-8n89d\" (UID: \"59da3269-88ea-4095-b841-cf1b27cb4274\") " pod="openstack/glance-db-sync-8n89d" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.021109 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59da3269-88ea-4095-b841-cf1b27cb4274-db-sync-config-data\") pod \"glance-db-sync-8n89d\" (UID: \"59da3269-88ea-4095-b841-cf1b27cb4274\") " pod="openstack/glance-db-sync-8n89d" Dec 05 
10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.133812 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59da3269-88ea-4095-b841-cf1b27cb4274-combined-ca-bundle\") pod \"glance-db-sync-8n89d\" (UID: \"59da3269-88ea-4095-b841-cf1b27cb4274\") " pod="openstack/glance-db-sync-8n89d" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.133860 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fd29\" (UniqueName: \"kubernetes.io/projected/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-kube-api-access-9fd29\") pod \"dnsmasq-dns-864b648dc7-zjf7j\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.134068 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-ovsdbserver-nb\") pod \"dnsmasq-dns-864b648dc7-zjf7j\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.134176 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4985p\" (UniqueName: \"kubernetes.io/projected/59da3269-88ea-4095-b841-cf1b27cb4274-kube-api-access-4985p\") pod \"glance-db-sync-8n89d\" (UID: \"59da3269-88ea-4095-b841-cf1b27cb4274\") " pod="openstack/glance-db-sync-8n89d" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.134201 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59da3269-88ea-4095-b841-cf1b27cb4274-db-sync-config-data\") pod \"glance-db-sync-8n89d\" (UID: \"59da3269-88ea-4095-b841-cf1b27cb4274\") " pod="openstack/glance-db-sync-8n89d" Dec 05 10:41:00 crc 
kubenswrapper[4796]: I1205 10:41:00.134250 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-dns-svc\") pod \"dnsmasq-dns-864b648dc7-zjf7j\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.134291 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-ovsdbserver-sb\") pod \"dnsmasq-dns-864b648dc7-zjf7j\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.134449 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-config\") pod \"dnsmasq-dns-864b648dc7-zjf7j\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.134512 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-dns-swift-storage-0\") pod \"dnsmasq-dns-864b648dc7-zjf7j\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.134659 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59da3269-88ea-4095-b841-cf1b27cb4274-config-data\") pod \"glance-db-sync-8n89d\" (UID: \"59da3269-88ea-4095-b841-cf1b27cb4274\") " pod="openstack/glance-db-sync-8n89d" Dec 05 10:41:00 crc kubenswrapper[4796]: 
I1205 10:41:00.139249 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59da3269-88ea-4095-b841-cf1b27cb4274-db-sync-config-data\") pod \"glance-db-sync-8n89d\" (UID: \"59da3269-88ea-4095-b841-cf1b27cb4274\") " pod="openstack/glance-db-sync-8n89d" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.139274 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59da3269-88ea-4095-b841-cf1b27cb4274-config-data\") pod \"glance-db-sync-8n89d\" (UID: \"59da3269-88ea-4095-b841-cf1b27cb4274\") " pod="openstack/glance-db-sync-8n89d" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.139667 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59da3269-88ea-4095-b841-cf1b27cb4274-combined-ca-bundle\") pod \"glance-db-sync-8n89d\" (UID: \"59da3269-88ea-4095-b841-cf1b27cb4274\") " pod="openstack/glance-db-sync-8n89d" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.146991 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4985p\" (UniqueName: \"kubernetes.io/projected/59da3269-88ea-4095-b841-cf1b27cb4274-kube-api-access-4985p\") pod \"glance-db-sync-8n89d\" (UID: \"59da3269-88ea-4095-b841-cf1b27cb4274\") " pod="openstack/glance-db-sync-8n89d" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.225108 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8n89d" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.235678 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-dns-svc\") pod \"dnsmasq-dns-864b648dc7-zjf7j\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.236019 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-ovsdbserver-sb\") pod \"dnsmasq-dns-864b648dc7-zjf7j\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.236122 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-config\") pod \"dnsmasq-dns-864b648dc7-zjf7j\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.236278 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-dns-swift-storage-0\") pod \"dnsmasq-dns-864b648dc7-zjf7j\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.236399 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fd29\" (UniqueName: \"kubernetes.io/projected/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-kube-api-access-9fd29\") pod \"dnsmasq-dns-864b648dc7-zjf7j\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " 
pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.237020 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-ovsdbserver-nb\") pod \"dnsmasq-dns-864b648dc7-zjf7j\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.236854 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-ovsdbserver-sb\") pod \"dnsmasq-dns-864b648dc7-zjf7j\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.236988 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-config\") pod \"dnsmasq-dns-864b648dc7-zjf7j\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.236554 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-dns-svc\") pod \"dnsmasq-dns-864b648dc7-zjf7j\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.237123 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-dns-swift-storage-0\") pod \"dnsmasq-dns-864b648dc7-zjf7j\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 
10:41:00.237599 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-ovsdbserver-nb\") pod \"dnsmasq-dns-864b648dc7-zjf7j\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.251370 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fd29\" (UniqueName: \"kubernetes.io/projected/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-kube-api-access-9fd29\") pod \"dnsmasq-dns-864b648dc7-zjf7j\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.317853 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.442254 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-vb5l5" podUID="2f5f2848-5f90-4a9f-a6f0-b6e83b586402" containerName="ovn-controller" probeResult="failure" output=< Dec 05 10:41:00 crc kubenswrapper[4796]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 05 10:41:00 crc kubenswrapper[4796]: > Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.704851 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8n89d"] Dec 05 10:41:00 crc kubenswrapper[4796]: W1205 10:41:00.707743 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59da3269_88ea_4095_b841_cf1b27cb4274.slice/crio-f8270895446b2189ea25c1699f64d60345edf40f9347405bbb419f135318a13e WatchSource:0}: Error finding container f8270895446b2189ea25c1699f64d60345edf40f9347405bbb419f135318a13e: Status 404 returned error can't find the container with id 
f8270895446b2189ea25c1699f64d60345edf40f9347405bbb419f135318a13e Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.709765 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.745562 4796 generic.go:334] "Generic (PLEG): container finished" podID="f735325f-6e38-45a2-a5bd-9ad19c40b36f" containerID="90d497911455b9ebcc3523bfde3fcf72dc248a8643992a0cc697cee2006b0b31" exitCode=0 Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.745621 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f735325f-6e38-45a2-a5bd-9ad19c40b36f","Type":"ContainerDied","Data":"90d497911455b9ebcc3523bfde3fcf72dc248a8643992a0cc697cee2006b0b31"} Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.746650 4796 generic.go:334] "Generic (PLEG): container finished" podID="1a61d456-9eea-447f-b576-77473222d108" containerID="41edc440f58fb63cf7fd571cf2e1576f6d345add64c3aa95d2ab5e12423231cf" exitCode=0 Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.746726 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1a61d456-9eea-447f-b576-77473222d108","Type":"ContainerDied","Data":"41edc440f58fb63cf7fd571cf2e1576f6d345add64c3aa95d2ab5e12423231cf"} Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.747571 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8n89d" event={"ID":"59da3269-88ea-4095-b841-cf1b27cb4274","Type":"ContainerStarted","Data":"f8270895446b2189ea25c1699f64d60345edf40f9347405bbb419f135318a13e"} Dec 05 10:41:00 crc kubenswrapper[4796]: W1205 10:41:00.756882 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod885ab35e_9f3a_43b2_8c03_5d5d9f6af4b5.slice/crio-5c731b1214e37a7703e573cf49c3af597016a8b40ce923f04ae266fb76f540c3 WatchSource:0}: Error finding 
container 5c731b1214e37a7703e573cf49c3af597016a8b40ce923f04ae266fb76f540c3: Status 404 returned error can't find the container with id 5c731b1214e37a7703e573cf49c3af597016a8b40ce923f04ae266fb76f540c3 Dec 05 10:41:00 crc kubenswrapper[4796]: I1205 10:41:00.757739 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864b648dc7-zjf7j"] Dec 05 10:41:01 crc kubenswrapper[4796]: I1205 10:41:01.757291 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1a61d456-9eea-447f-b576-77473222d108","Type":"ContainerStarted","Data":"02fa5ac098b11c09d932690d9a4be31f5d6f43533d9db4d59a743254bce11892"} Dec 05 10:41:01 crc kubenswrapper[4796]: I1205 10:41:01.758131 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 05 10:41:01 crc kubenswrapper[4796]: I1205 10:41:01.759381 4796 generic.go:334] "Generic (PLEG): container finished" podID="885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5" containerID="8d35e8478ce2098bacd764825f62c538e503f00c0a23634a97440aa077ebdf05" exitCode=0 Dec 05 10:41:01 crc kubenswrapper[4796]: I1205 10:41:01.759422 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" event={"ID":"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5","Type":"ContainerDied","Data":"8d35e8478ce2098bacd764825f62c538e503f00c0a23634a97440aa077ebdf05"} Dec 05 10:41:01 crc kubenswrapper[4796]: I1205 10:41:01.759437 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" event={"ID":"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5","Type":"ContainerStarted","Data":"5c731b1214e37a7703e573cf49c3af597016a8b40ce923f04ae266fb76f540c3"} Dec 05 10:41:01 crc kubenswrapper[4796]: I1205 10:41:01.768479 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"f735325f-6e38-45a2-a5bd-9ad19c40b36f","Type":"ContainerStarted","Data":"50196089115ad967e95a3af23289a733957ed5dc09de5e8bb3c60e6f3d6ed797"} Dec 05 10:41:01 crc kubenswrapper[4796]: I1205 10:41:01.769116 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:41:01 crc kubenswrapper[4796]: I1205 10:41:01.782517 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=45.921208733 podStartE2EDuration="52.782503281s" podCreationTimestamp="2025-12-05 10:40:09 +0000 UTC" firstStartedPulling="2025-12-05 10:40:20.129207708 +0000 UTC m=+766.417313221" lastFinishedPulling="2025-12-05 10:40:26.990502256 +0000 UTC m=+773.278607769" observedRunningTime="2025-12-05 10:41:01.781328401 +0000 UTC m=+808.069433914" watchObservedRunningTime="2025-12-05 10:41:01.782503281 +0000 UTC m=+808.070608794" Dec 05 10:41:01 crc kubenswrapper[4796]: I1205 10:41:01.808793 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=44.614963148 podStartE2EDuration="51.808781499s" podCreationTimestamp="2025-12-05 10:40:10 +0000 UTC" firstStartedPulling="2025-12-05 10:40:20.01706616 +0000 UTC m=+766.305171674" lastFinishedPulling="2025-12-05 10:40:27.210884512 +0000 UTC m=+773.498990025" observedRunningTime="2025-12-05 10:41:01.803060978 +0000 UTC m=+808.091166490" watchObservedRunningTime="2025-12-05 10:41:01.808781499 +0000 UTC m=+808.096887002" Dec 05 10:41:02 crc kubenswrapper[4796]: I1205 10:41:02.776835 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" event={"ID":"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5","Type":"ContainerStarted","Data":"048621eedacee387ce4bc4e7a8d288b150a894c61773fd4e86218171887be854"} Dec 05 10:41:02 crc kubenswrapper[4796]: I1205 10:41:02.795068 4796 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" podStartSLOduration=3.795055395 podStartE2EDuration="3.795055395s" podCreationTimestamp="2025-12-05 10:40:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:41:02.791514433 +0000 UTC m=+809.079619966" watchObservedRunningTime="2025-12-05 10:41:02.795055395 +0000 UTC m=+809.083160908" Dec 05 10:41:03 crc kubenswrapper[4796]: I1205 10:41:03.784085 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.352811 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-vb5l5" podUID="2f5f2848-5f90-4a9f-a6f0-b6e83b586402" containerName="ovn-controller" probeResult="failure" output=< Dec 05 10:41:05 crc kubenswrapper[4796]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 05 10:41:05 crc kubenswrapper[4796]: > Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.362970 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.366423 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-z7p8z" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.552445 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vb5l5-config-jvwzx"] Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.553393 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.555574 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.558958 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vb5l5-config-jvwzx"] Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.621223 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-additional-scripts\") pod \"ovn-controller-vb5l5-config-jvwzx\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.621274 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-scripts\") pod \"ovn-controller-vb5l5-config-jvwzx\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.621294 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-var-log-ovn\") pod \"ovn-controller-vb5l5-config-jvwzx\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.621442 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-var-run-ovn\") pod \"ovn-controller-vb5l5-config-jvwzx\" (UID: 
\"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.621546 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-var-run\") pod \"ovn-controller-vb5l5-config-jvwzx\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.621574 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w85rt\" (UniqueName: \"kubernetes.io/projected/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-kube-api-access-w85rt\") pod \"ovn-controller-vb5l5-config-jvwzx\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.722496 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-additional-scripts\") pod \"ovn-controller-vb5l5-config-jvwzx\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.722845 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-scripts\") pod \"ovn-controller-vb5l5-config-jvwzx\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.722867 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-var-log-ovn\") pod 
\"ovn-controller-vb5l5-config-jvwzx\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.722905 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-var-run-ovn\") pod \"ovn-controller-vb5l5-config-jvwzx\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.722939 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-var-run\") pod \"ovn-controller-vb5l5-config-jvwzx\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.722961 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w85rt\" (UniqueName: \"kubernetes.io/projected/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-kube-api-access-w85rt\") pod \"ovn-controller-vb5l5-config-jvwzx\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.723168 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-additional-scripts\") pod \"ovn-controller-vb5l5-config-jvwzx\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.723197 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-var-run-ovn\") pod 
\"ovn-controller-vb5l5-config-jvwzx\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.723207 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-var-log-ovn\") pod \"ovn-controller-vb5l5-config-jvwzx\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.723250 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-var-run\") pod \"ovn-controller-vb5l5-config-jvwzx\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.724738 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-scripts\") pod \"ovn-controller-vb5l5-config-jvwzx\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.748206 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w85rt\" (UniqueName: \"kubernetes.io/projected/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-kube-api-access-w85rt\") pod \"ovn-controller-vb5l5-config-jvwzx\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:05 crc kubenswrapper[4796]: I1205 10:41:05.866979 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.042654 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vb5l5-config-jvwzx"] Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.319560 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.359929 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d5fbdd8c-tgjp2"] Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.369047 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" podUID="6eeb640e-fd19-4c09-b322-1aed1fa21fcc" containerName="dnsmasq-dns" containerID="cri-o://48077c01c2702955978f8bdaa91830daaa7b76f5152ad6b6e90a7d204e8413e7" gracePeriod=10 Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.381430 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-vb5l5" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.679266 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.789466 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-dns-svc\") pod \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\" (UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.789866 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7qbr\" (UniqueName: \"kubernetes.io/projected/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-kube-api-access-n7qbr\") pod \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\" (UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.789902 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-ovsdbserver-nb\") pod \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\" (UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.790039 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-config\") pod \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\" (UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.790062 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-ovsdbserver-sb\") pod \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\" (UID: \"6eeb640e-fd19-4c09-b322-1aed1fa21fcc\") " Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.793982 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-kube-api-access-n7qbr" (OuterVolumeSpecName: "kube-api-access-n7qbr") pod "6eeb640e-fd19-4c09-b322-1aed1fa21fcc" (UID: "6eeb640e-fd19-4c09-b322-1aed1fa21fcc"). InnerVolumeSpecName "kube-api-access-n7qbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.821430 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6eeb640e-fd19-4c09-b322-1aed1fa21fcc" (UID: "6eeb640e-fd19-4c09-b322-1aed1fa21fcc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.821866 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-config" (OuterVolumeSpecName: "config") pod "6eeb640e-fd19-4c09-b322-1aed1fa21fcc" (UID: "6eeb640e-fd19-4c09-b322-1aed1fa21fcc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.825949 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6eeb640e-fd19-4c09-b322-1aed1fa21fcc" (UID: "6eeb640e-fd19-4c09-b322-1aed1fa21fcc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.827927 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6eeb640e-fd19-4c09-b322-1aed1fa21fcc" (UID: "6eeb640e-fd19-4c09-b322-1aed1fa21fcc"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.831223 4796 generic.go:334] "Generic (PLEG): container finished" podID="7d100de0-f6c7-4a8c-8c7e-60ed550f3d43" containerID="4d5cc9de55b28e31527b0fdde95c3aae73d3c3d6071c3c4a663d32d515885afb" exitCode=0 Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.831324 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vb5l5-config-jvwzx" event={"ID":"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43","Type":"ContainerDied","Data":"4d5cc9de55b28e31527b0fdde95c3aae73d3c3d6071c3c4a663d32d515885afb"} Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.831365 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vb5l5-config-jvwzx" event={"ID":"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43","Type":"ContainerStarted","Data":"a959fe9b9fd5310056175ab239ff161f421d0b0b418db822fb0853b69dc70041"} Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.834243 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8n89d" event={"ID":"59da3269-88ea-4095-b841-cf1b27cb4274","Type":"ContainerStarted","Data":"4817e7c6935750d26073e0450d91fdceea5fcda49400bec5017a029b2b047ae9"} Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.844319 4796 generic.go:334] "Generic (PLEG): container finished" podID="6eeb640e-fd19-4c09-b322-1aed1fa21fcc" containerID="48077c01c2702955978f8bdaa91830daaa7b76f5152ad6b6e90a7d204e8413e7" exitCode=0 Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.844376 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" event={"ID":"6eeb640e-fd19-4c09-b322-1aed1fa21fcc","Type":"ContainerDied","Data":"48077c01c2702955978f8bdaa91830daaa7b76f5152ad6b6e90a7d204e8413e7"} Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.844394 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" 
event={"ID":"6eeb640e-fd19-4c09-b322-1aed1fa21fcc","Type":"ContainerDied","Data":"3e9e2c7d11d983f436bea95f068e648b42ec460a68d95e4ad440ba38363d6c16"} Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.844409 4796 scope.go:117] "RemoveContainer" containerID="48077c01c2702955978f8bdaa91830daaa7b76f5152ad6b6e90a7d204e8413e7" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.844558 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5fbdd8c-tgjp2" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.859437 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-8n89d" podStartSLOduration=2.834718065 podStartE2EDuration="11.859425617s" podCreationTimestamp="2025-12-05 10:40:59 +0000 UTC" firstStartedPulling="2025-12-05 10:41:00.709525074 +0000 UTC m=+806.997630588" lastFinishedPulling="2025-12-05 10:41:09.734232627 +0000 UTC m=+816.022338140" observedRunningTime="2025-12-05 10:41:10.856472291 +0000 UTC m=+817.144577804" watchObservedRunningTime="2025-12-05 10:41:10.859425617 +0000 UTC m=+817.147531130" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.892184 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.892209 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.892219 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.892228 4796 
reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.892252 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7qbr\" (UniqueName: \"kubernetes.io/projected/6eeb640e-fd19-4c09-b322-1aed1fa21fcc-kube-api-access-n7qbr\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.899988 4796 scope.go:117] "RemoveContainer" containerID="8230e065f8b73c6964b412b2d8240d769b663fa5bb26dc9e9e5a9e383a3a0899" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.907179 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d5fbdd8c-tgjp2"] Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.913296 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59d5fbdd8c-tgjp2"] Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.931164 4796 scope.go:117] "RemoveContainer" containerID="48077c01c2702955978f8bdaa91830daaa7b76f5152ad6b6e90a7d204e8413e7" Dec 05 10:41:10 crc kubenswrapper[4796]: E1205 10:41:10.931557 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48077c01c2702955978f8bdaa91830daaa7b76f5152ad6b6e90a7d204e8413e7\": container with ID starting with 48077c01c2702955978f8bdaa91830daaa7b76f5152ad6b6e90a7d204e8413e7 not found: ID does not exist" containerID="48077c01c2702955978f8bdaa91830daaa7b76f5152ad6b6e90a7d204e8413e7" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.931586 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48077c01c2702955978f8bdaa91830daaa7b76f5152ad6b6e90a7d204e8413e7"} err="failed to get container status \"48077c01c2702955978f8bdaa91830daaa7b76f5152ad6b6e90a7d204e8413e7\": rpc error: code = NotFound desc = could not 
find container \"48077c01c2702955978f8bdaa91830daaa7b76f5152ad6b6e90a7d204e8413e7\": container with ID starting with 48077c01c2702955978f8bdaa91830daaa7b76f5152ad6b6e90a7d204e8413e7 not found: ID does not exist" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.931608 4796 scope.go:117] "RemoveContainer" containerID="8230e065f8b73c6964b412b2d8240d769b663fa5bb26dc9e9e5a9e383a3a0899" Dec 05 10:41:10 crc kubenswrapper[4796]: E1205 10:41:10.931921 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8230e065f8b73c6964b412b2d8240d769b663fa5bb26dc9e9e5a9e383a3a0899\": container with ID starting with 8230e065f8b73c6964b412b2d8240d769b663fa5bb26dc9e9e5a9e383a3a0899 not found: ID does not exist" containerID="8230e065f8b73c6964b412b2d8240d769b663fa5bb26dc9e9e5a9e383a3a0899" Dec 05 10:41:10 crc kubenswrapper[4796]: I1205 10:41:10.931951 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8230e065f8b73c6964b412b2d8240d769b663fa5bb26dc9e9e5a9e383a3a0899"} err="failed to get container status \"8230e065f8b73c6964b412b2d8240d769b663fa5bb26dc9e9e5a9e383a3a0899\": rpc error: code = NotFound desc = could not find container \"8230e065f8b73c6964b412b2d8240d769b663fa5bb26dc9e9e5a9e383a3a0899\": container with ID starting with 8230e065f8b73c6964b412b2d8240d769b663fa5bb26dc9e9e5a9e383a3a0899 not found: ID does not exist" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.176810 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.421210 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-6dghb"] Dec 05 10:41:11 crc kubenswrapper[4796]: E1205 10:41:11.421502 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eeb640e-fd19-4c09-b322-1aed1fa21fcc" containerName="init" Dec 05 10:41:11 crc 
kubenswrapper[4796]: I1205 10:41:11.421519 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eeb640e-fd19-4c09-b322-1aed1fa21fcc" containerName="init" Dec 05 10:41:11 crc kubenswrapper[4796]: E1205 10:41:11.421554 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eeb640e-fd19-4c09-b322-1aed1fa21fcc" containerName="dnsmasq-dns" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.421561 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eeb640e-fd19-4c09-b322-1aed1fa21fcc" containerName="dnsmasq-dns" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.421722 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eeb640e-fd19-4c09-b322-1aed1fa21fcc" containerName="dnsmasq-dns" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.422243 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6dghb" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.441327 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.444787 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6dghb"] Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.502191 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfc6j\" (UniqueName: \"kubernetes.io/projected/a535246b-8d73-4814-9eb0-9bc7e04e3414-kube-api-access-nfc6j\") pod \"cinder-db-create-6dghb\" (UID: \"a535246b-8d73-4814-9eb0-9bc7e04e3414\") " pod="openstack/cinder-db-create-6dghb" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.524714 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-kcxnj"] Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.525624 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-kcxnj" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.540991 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-kcxnj"] Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.604023 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7kgz\" (UniqueName: \"kubernetes.io/projected/70929a4c-02e5-48e5-acf6-390bba65c808-kube-api-access-w7kgz\") pod \"barbican-db-create-kcxnj\" (UID: \"70929a4c-02e5-48e5-acf6-390bba65c808\") " pod="openstack/barbican-db-create-kcxnj" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.604089 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfc6j\" (UniqueName: \"kubernetes.io/projected/a535246b-8d73-4814-9eb0-9bc7e04e3414-kube-api-access-nfc6j\") pod \"cinder-db-create-6dghb\" (UID: \"a535246b-8d73-4814-9eb0-9bc7e04e3414\") " pod="openstack/cinder-db-create-6dghb" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.623762 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfc6j\" (UniqueName: \"kubernetes.io/projected/a535246b-8d73-4814-9eb0-9bc7e04e3414-kube-api-access-nfc6j\") pod \"cinder-db-create-6dghb\" (UID: \"a535246b-8d73-4814-9eb0-9bc7e04e3414\") " pod="openstack/cinder-db-create-6dghb" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.659124 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-hqvft"] Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.660183 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-hqvft" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.662189 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.662266 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9thmp" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.662419 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.662605 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.664355 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-hqvft"] Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.705879 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdcb30d0-0871-4171-8406-92f4864feac1-config-data\") pod \"keystone-db-sync-hqvft\" (UID: \"bdcb30d0-0871-4171-8406-92f4864feac1\") " pod="openstack/keystone-db-sync-hqvft" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.706078 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hw4c\" (UniqueName: \"kubernetes.io/projected/bdcb30d0-0871-4171-8406-92f4864feac1-kube-api-access-8hw4c\") pod \"keystone-db-sync-hqvft\" (UID: \"bdcb30d0-0871-4171-8406-92f4864feac1\") " pod="openstack/keystone-db-sync-hqvft" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.706205 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7kgz\" (UniqueName: \"kubernetes.io/projected/70929a4c-02e5-48e5-acf6-390bba65c808-kube-api-access-w7kgz\") pod \"barbican-db-create-kcxnj\" (UID: 
\"70929a4c-02e5-48e5-acf6-390bba65c808\") " pod="openstack/barbican-db-create-kcxnj" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.706292 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcb30d0-0871-4171-8406-92f4864feac1-combined-ca-bundle\") pod \"keystone-db-sync-hqvft\" (UID: \"bdcb30d0-0871-4171-8406-92f4864feac1\") " pod="openstack/keystone-db-sync-hqvft" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.722717 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7kgz\" (UniqueName: \"kubernetes.io/projected/70929a4c-02e5-48e5-acf6-390bba65c808-kube-api-access-w7kgz\") pod \"barbican-db-create-kcxnj\" (UID: \"70929a4c-02e5-48e5-acf6-390bba65c808\") " pod="openstack/barbican-db-create-kcxnj" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.734825 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-6dghb" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.735980 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-fvtdm"] Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.736887 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fvtdm" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.751794 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fvtdm"] Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.807492 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wnk8\" (UniqueName: \"kubernetes.io/projected/e1a87b8d-81a5-468f-9264-a5896daa5960-kube-api-access-9wnk8\") pod \"neutron-db-create-fvtdm\" (UID: \"e1a87b8d-81a5-468f-9264-a5896daa5960\") " pod="openstack/neutron-db-create-fvtdm" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.807546 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdcb30d0-0871-4171-8406-92f4864feac1-config-data\") pod \"keystone-db-sync-hqvft\" (UID: \"bdcb30d0-0871-4171-8406-92f4864feac1\") " pod="openstack/keystone-db-sync-hqvft" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.807574 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hw4c\" (UniqueName: \"kubernetes.io/projected/bdcb30d0-0871-4171-8406-92f4864feac1-kube-api-access-8hw4c\") pod \"keystone-db-sync-hqvft\" (UID: \"bdcb30d0-0871-4171-8406-92f4864feac1\") " pod="openstack/keystone-db-sync-hqvft" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.807637 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcb30d0-0871-4171-8406-92f4864feac1-combined-ca-bundle\") pod \"keystone-db-sync-hqvft\" (UID: \"bdcb30d0-0871-4171-8406-92f4864feac1\") " pod="openstack/keystone-db-sync-hqvft" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.813909 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bdcb30d0-0871-4171-8406-92f4864feac1-combined-ca-bundle\") pod \"keystone-db-sync-hqvft\" (UID: \"bdcb30d0-0871-4171-8406-92f4864feac1\") " pod="openstack/keystone-db-sync-hqvft" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.814053 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdcb30d0-0871-4171-8406-92f4864feac1-config-data\") pod \"keystone-db-sync-hqvft\" (UID: \"bdcb30d0-0871-4171-8406-92f4864feac1\") " pod="openstack/keystone-db-sync-hqvft" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.820472 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hw4c\" (UniqueName: \"kubernetes.io/projected/bdcb30d0-0871-4171-8406-92f4864feac1-kube-api-access-8hw4c\") pod \"keystone-db-sync-hqvft\" (UID: \"bdcb30d0-0871-4171-8406-92f4864feac1\") " pod="openstack/keystone-db-sync-hqvft" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.838033 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-kcxnj" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.908886 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wnk8\" (UniqueName: \"kubernetes.io/projected/e1a87b8d-81a5-468f-9264-a5896daa5960-kube-api-access-9wnk8\") pod \"neutron-db-create-fvtdm\" (UID: \"e1a87b8d-81a5-468f-9264-a5896daa5960\") " pod="openstack/neutron-db-create-fvtdm" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.926592 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wnk8\" (UniqueName: \"kubernetes.io/projected/e1a87b8d-81a5-468f-9264-a5896daa5960-kube-api-access-9wnk8\") pod \"neutron-db-create-fvtdm\" (UID: \"e1a87b8d-81a5-468f-9264-a5896daa5960\") " pod="openstack/neutron-db-create-fvtdm" Dec 05 10:41:11 crc kubenswrapper[4796]: I1205 10:41:11.978111 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-hqvft" Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.049954 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eeb640e-fd19-4c09-b322-1aed1fa21fcc" path="/var/lib/kubelet/pods/6eeb640e-fd19-4c09-b322-1aed1fa21fcc/volumes" Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.117234 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.152480 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fvtdm" Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.182024 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-6dghb"] Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.213626 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-var-log-ovn\") pod \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.213674 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w85rt\" (UniqueName: \"kubernetes.io/projected/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-kube-api-access-w85rt\") pod \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.213745 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7d100de0-f6c7-4a8c-8c7e-60ed550f3d43" (UID: "7d100de0-f6c7-4a8c-8c7e-60ed550f3d43"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.213784 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-var-run\") pod \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.213867 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-scripts\") pod \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.213879 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-var-run" (OuterVolumeSpecName: "var-run") pod "7d100de0-f6c7-4a8c-8c7e-60ed550f3d43" (UID: "7d100de0-f6c7-4a8c-8c7e-60ed550f3d43"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.213914 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-var-run-ovn\") pod \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.213932 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-additional-scripts\") pod \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\" (UID: \"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43\") " Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.213948 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7d100de0-f6c7-4a8c-8c7e-60ed550f3d43" (UID: "7d100de0-f6c7-4a8c-8c7e-60ed550f3d43"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.214225 4796 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.214238 4796 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-var-run\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.214246 4796 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.214553 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7d100de0-f6c7-4a8c-8c7e-60ed550f3d43" (UID: "7d100de0-f6c7-4a8c-8c7e-60ed550f3d43"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.214632 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-scripts" (OuterVolumeSpecName: "scripts") pod "7d100de0-f6c7-4a8c-8c7e-60ed550f3d43" (UID: "7d100de0-f6c7-4a8c-8c7e-60ed550f3d43"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.219748 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-kube-api-access-w85rt" (OuterVolumeSpecName: "kube-api-access-w85rt") pod "7d100de0-f6c7-4a8c-8c7e-60ed550f3d43" (UID: "7d100de0-f6c7-4a8c-8c7e-60ed550f3d43"). InnerVolumeSpecName "kube-api-access-w85rt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.297378 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-kcxnj"] Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.315668 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.315725 4796 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.315738 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w85rt\" (UniqueName: \"kubernetes.io/projected/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43-kube-api-access-w85rt\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.445975 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-hqvft"] Dec 05 10:41:12 crc kubenswrapper[4796]: W1205 10:41:12.450021 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdcb30d0_0871_4171_8406_92f4864feac1.slice/crio-560292698b3c62eeaee175eaa8b8c3529cd33c1cf8922977391775361d6d8080 WatchSource:0}: Error finding container 
560292698b3c62eeaee175eaa8b8c3529cd33c1cf8922977391775361d6d8080: Status 404 returned error can't find the container with id 560292698b3c62eeaee175eaa8b8c3529cd33c1cf8922977391775361d6d8080 Dec 05 10:41:12 crc kubenswrapper[4796]: W1205 10:41:12.635385 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1a87b8d_81a5_468f_9264_a5896daa5960.slice/crio-90482f7237d10ed2abaf102ac25612c581ee1233ee3620575815b8f1bbd9171a WatchSource:0}: Error finding container 90482f7237d10ed2abaf102ac25612c581ee1233ee3620575815b8f1bbd9171a: Status 404 returned error can't find the container with id 90482f7237d10ed2abaf102ac25612c581ee1233ee3620575815b8f1bbd9171a Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.641987 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fvtdm"] Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.862236 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vb5l5-config-jvwzx" event={"ID":"7d100de0-f6c7-4a8c-8c7e-60ed550f3d43","Type":"ContainerDied","Data":"a959fe9b9fd5310056175ab239ff161f421d0b0b418db822fb0853b69dc70041"} Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.862456 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a959fe9b9fd5310056175ab239ff161f421d0b0b418db822fb0853b69dc70041" Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.862459 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vb5l5-config-jvwzx" Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.863656 4796 generic.go:334] "Generic (PLEG): container finished" podID="70929a4c-02e5-48e5-acf6-390bba65c808" containerID="13874658bc680a6f0ba387c012debd3208f9c94e630ac766695beeb4de48117e" exitCode=0 Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.863717 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-kcxnj" event={"ID":"70929a4c-02e5-48e5-acf6-390bba65c808","Type":"ContainerDied","Data":"13874658bc680a6f0ba387c012debd3208f9c94e630ac766695beeb4de48117e"} Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.863734 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-kcxnj" event={"ID":"70929a4c-02e5-48e5-acf6-390bba65c808","Type":"ContainerStarted","Data":"2fd18f5ae00af53ba83e99deb2b947e2f6ea1c91248dbdb2a3b885a04839395a"} Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.864746 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hqvft" event={"ID":"bdcb30d0-0871-4171-8406-92f4864feac1","Type":"ContainerStarted","Data":"560292698b3c62eeaee175eaa8b8c3529cd33c1cf8922977391775361d6d8080"} Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.865570 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fvtdm" event={"ID":"e1a87b8d-81a5-468f-9264-a5896daa5960","Type":"ContainerStarted","Data":"90482f7237d10ed2abaf102ac25612c581ee1233ee3620575815b8f1bbd9171a"} Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.867892 4796 generic.go:334] "Generic (PLEG): container finished" podID="a535246b-8d73-4814-9eb0-9bc7e04e3414" containerID="adf52932c809ee3e9a63bb672c5648e8a16e65bea39a62441c2a0762117988d6" exitCode=0 Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.867933 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6dghb" 
event={"ID":"a535246b-8d73-4814-9eb0-9bc7e04e3414","Type":"ContainerDied","Data":"adf52932c809ee3e9a63bb672c5648e8a16e65bea39a62441c2a0762117988d6"} Dec 05 10:41:12 crc kubenswrapper[4796]: I1205 10:41:12.867960 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6dghb" event={"ID":"a535246b-8d73-4814-9eb0-9bc7e04e3414","Type":"ContainerStarted","Data":"df323bdba4b222498fa2807aa1231ea8322493d280c2f44cbb0cb35ed5800bc0"} Dec 05 10:41:13 crc kubenswrapper[4796]: I1205 10:41:13.204822 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-vb5l5-config-jvwzx"] Dec 05 10:41:13 crc kubenswrapper[4796]: I1205 10:41:13.205709 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-vb5l5-config-jvwzx"] Dec 05 10:41:13 crc kubenswrapper[4796]: I1205 10:41:13.877192 4796 generic.go:334] "Generic (PLEG): container finished" podID="e1a87b8d-81a5-468f-9264-a5896daa5960" containerID="b98a37052e555bf078066c2b1848cd39f0f9b716503f10f7a9948aceaf3f4acd" exitCode=0 Dec 05 10:41:13 crc kubenswrapper[4796]: I1205 10:41:13.877283 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fvtdm" event={"ID":"e1a87b8d-81a5-468f-9264-a5896daa5960","Type":"ContainerDied","Data":"b98a37052e555bf078066c2b1848cd39f0f9b716503f10f7a9948aceaf3f4acd"} Dec 05 10:41:14 crc kubenswrapper[4796]: I1205 10:41:14.041967 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d100de0-f6c7-4a8c-8c7e-60ed550f3d43" path="/var/lib/kubelet/pods/7d100de0-f6c7-4a8c-8c7e-60ed550f3d43/volumes" Dec 05 10:41:14 crc kubenswrapper[4796]: I1205 10:41:14.146079 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-kcxnj" Dec 05 10:41:14 crc kubenswrapper[4796]: I1205 10:41:14.149763 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-6dghb" Dec 05 10:41:14 crc kubenswrapper[4796]: I1205 10:41:14.241923 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfc6j\" (UniqueName: \"kubernetes.io/projected/a535246b-8d73-4814-9eb0-9bc7e04e3414-kube-api-access-nfc6j\") pod \"a535246b-8d73-4814-9eb0-9bc7e04e3414\" (UID: \"a535246b-8d73-4814-9eb0-9bc7e04e3414\") " Dec 05 10:41:14 crc kubenswrapper[4796]: I1205 10:41:14.241986 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7kgz\" (UniqueName: \"kubernetes.io/projected/70929a4c-02e5-48e5-acf6-390bba65c808-kube-api-access-w7kgz\") pod \"70929a4c-02e5-48e5-acf6-390bba65c808\" (UID: \"70929a4c-02e5-48e5-acf6-390bba65c808\") " Dec 05 10:41:14 crc kubenswrapper[4796]: I1205 10:41:14.246529 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a535246b-8d73-4814-9eb0-9bc7e04e3414-kube-api-access-nfc6j" (OuterVolumeSpecName: "kube-api-access-nfc6j") pod "a535246b-8d73-4814-9eb0-9bc7e04e3414" (UID: "a535246b-8d73-4814-9eb0-9bc7e04e3414"). InnerVolumeSpecName "kube-api-access-nfc6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:14 crc kubenswrapper[4796]: I1205 10:41:14.251577 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70929a4c-02e5-48e5-acf6-390bba65c808-kube-api-access-w7kgz" (OuterVolumeSpecName: "kube-api-access-w7kgz") pod "70929a4c-02e5-48e5-acf6-390bba65c808" (UID: "70929a4c-02e5-48e5-acf6-390bba65c808"). InnerVolumeSpecName "kube-api-access-w7kgz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:14 crc kubenswrapper[4796]: I1205 10:41:14.344300 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfc6j\" (UniqueName: \"kubernetes.io/projected/a535246b-8d73-4814-9eb0-9bc7e04e3414-kube-api-access-nfc6j\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:14 crc kubenswrapper[4796]: I1205 10:41:14.344319 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7kgz\" (UniqueName: \"kubernetes.io/projected/70929a4c-02e5-48e5-acf6-390bba65c808-kube-api-access-w7kgz\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:14 crc kubenswrapper[4796]: I1205 10:41:14.885452 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-kcxnj" Dec 05 10:41:14 crc kubenswrapper[4796]: I1205 10:41:14.885439 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-kcxnj" event={"ID":"70929a4c-02e5-48e5-acf6-390bba65c808","Type":"ContainerDied","Data":"2fd18f5ae00af53ba83e99deb2b947e2f6ea1c91248dbdb2a3b885a04839395a"} Dec 05 10:41:14 crc kubenswrapper[4796]: I1205 10:41:14.885613 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fd18f5ae00af53ba83e99deb2b947e2f6ea1c91248dbdb2a3b885a04839395a" Dec 05 10:41:14 crc kubenswrapper[4796]: I1205 10:41:14.887190 4796 generic.go:334] "Generic (PLEG): container finished" podID="59da3269-88ea-4095-b841-cf1b27cb4274" containerID="4817e7c6935750d26073e0450d91fdceea5fcda49400bec5017a029b2b047ae9" exitCode=0 Dec 05 10:41:14 crc kubenswrapper[4796]: I1205 10:41:14.887273 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8n89d" event={"ID":"59da3269-88ea-4095-b841-cf1b27cb4274","Type":"ContainerDied","Data":"4817e7c6935750d26073e0450d91fdceea5fcda49400bec5017a029b2b047ae9"} Dec 05 10:41:14 crc kubenswrapper[4796]: I1205 10:41:14.890756 4796 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/cinder-db-create-6dghb" Dec 05 10:41:14 crc kubenswrapper[4796]: I1205 10:41:14.895873 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-6dghb" event={"ID":"a535246b-8d73-4814-9eb0-9bc7e04e3414","Type":"ContainerDied","Data":"df323bdba4b222498fa2807aa1231ea8322493d280c2f44cbb0cb35ed5800bc0"} Dec 05 10:41:14 crc kubenswrapper[4796]: I1205 10:41:14.895921 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df323bdba4b222498fa2807aa1231ea8322493d280c2f44cbb0cb35ed5800bc0" Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.063991 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fvtdm" Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.172069 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wnk8\" (UniqueName: \"kubernetes.io/projected/e1a87b8d-81a5-468f-9264-a5896daa5960-kube-api-access-9wnk8\") pod \"e1a87b8d-81a5-468f-9264-a5896daa5960\" (UID: \"e1a87b8d-81a5-468f-9264-a5896daa5960\") " Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.173668 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8n89d" Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.176271 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a87b8d-81a5-468f-9264-a5896daa5960-kube-api-access-9wnk8" (OuterVolumeSpecName: "kube-api-access-9wnk8") pod "e1a87b8d-81a5-468f-9264-a5896daa5960" (UID: "e1a87b8d-81a5-468f-9264-a5896daa5960"). InnerVolumeSpecName "kube-api-access-9wnk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.273618 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4985p\" (UniqueName: \"kubernetes.io/projected/59da3269-88ea-4095-b841-cf1b27cb4274-kube-api-access-4985p\") pod \"59da3269-88ea-4095-b841-cf1b27cb4274\" (UID: \"59da3269-88ea-4095-b841-cf1b27cb4274\") " Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.273974 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59da3269-88ea-4095-b841-cf1b27cb4274-combined-ca-bundle\") pod \"59da3269-88ea-4095-b841-cf1b27cb4274\" (UID: \"59da3269-88ea-4095-b841-cf1b27cb4274\") " Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.274051 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59da3269-88ea-4095-b841-cf1b27cb4274-config-data\") pod \"59da3269-88ea-4095-b841-cf1b27cb4274\" (UID: \"59da3269-88ea-4095-b841-cf1b27cb4274\") " Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.274081 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59da3269-88ea-4095-b841-cf1b27cb4274-db-sync-config-data\") pod \"59da3269-88ea-4095-b841-cf1b27cb4274\" (UID: \"59da3269-88ea-4095-b841-cf1b27cb4274\") " Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.274437 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wnk8\" (UniqueName: \"kubernetes.io/projected/e1a87b8d-81a5-468f-9264-a5896daa5960-kube-api-access-9wnk8\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.277219 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59da3269-88ea-4095-b841-cf1b27cb4274-db-sync-config-data" 
(OuterVolumeSpecName: "db-sync-config-data") pod "59da3269-88ea-4095-b841-cf1b27cb4274" (UID: "59da3269-88ea-4095-b841-cf1b27cb4274"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.277380 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59da3269-88ea-4095-b841-cf1b27cb4274-kube-api-access-4985p" (OuterVolumeSpecName: "kube-api-access-4985p") pod "59da3269-88ea-4095-b841-cf1b27cb4274" (UID: "59da3269-88ea-4095-b841-cf1b27cb4274"). InnerVolumeSpecName "kube-api-access-4985p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.292426 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59da3269-88ea-4095-b841-cf1b27cb4274-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59da3269-88ea-4095-b841-cf1b27cb4274" (UID: "59da3269-88ea-4095-b841-cf1b27cb4274"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.305100 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59da3269-88ea-4095-b841-cf1b27cb4274-config-data" (OuterVolumeSpecName: "config-data") pod "59da3269-88ea-4095-b841-cf1b27cb4274" (UID: "59da3269-88ea-4095-b841-cf1b27cb4274"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.375848 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4985p\" (UniqueName: \"kubernetes.io/projected/59da3269-88ea-4095-b841-cf1b27cb4274-kube-api-access-4985p\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.375870 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59da3269-88ea-4095-b841-cf1b27cb4274-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.375879 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59da3269-88ea-4095-b841-cf1b27cb4274-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.375887 4796 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59da3269-88ea-4095-b841-cf1b27cb4274-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.904080 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8n89d" event={"ID":"59da3269-88ea-4095-b841-cf1b27cb4274","Type":"ContainerDied","Data":"f8270895446b2189ea25c1699f64d60345edf40f9347405bbb419f135318a13e"} Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.904121 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8270895446b2189ea25c1699f64d60345edf40f9347405bbb419f135318a13e" Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.904348 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8n89d" Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.905468 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hqvft" event={"ID":"bdcb30d0-0871-4171-8406-92f4864feac1","Type":"ContainerStarted","Data":"2b4644ac70c8362629a40b98893046e4e8ec6d28e082f9d683a251cc594a5213"} Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.906728 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fvtdm" event={"ID":"e1a87b8d-81a5-468f-9264-a5896daa5960","Type":"ContainerDied","Data":"90482f7237d10ed2abaf102ac25612c581ee1233ee3620575815b8f1bbd9171a"} Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.906750 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90482f7237d10ed2abaf102ac25612c581ee1233ee3620575815b8f1bbd9171a" Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.906783 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fvtdm" Dec 05 10:41:16 crc kubenswrapper[4796]: I1205 10:41:16.925993 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-hqvft" podStartSLOduration=2.373732675 podStartE2EDuration="5.925976585s" podCreationTimestamp="2025-12-05 10:41:11 +0000 UTC" firstStartedPulling="2025-12-05 10:41:12.452045299 +0000 UTC m=+818.740150813" lastFinishedPulling="2025-12-05 10:41:16.004289209 +0000 UTC m=+822.292394723" observedRunningTime="2025-12-05 10:41:16.918289974 +0000 UTC m=+823.206395487" watchObservedRunningTime="2025-12-05 10:41:16.925976585 +0000 UTC m=+823.214082098" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.143917 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d879466b9-hwd7g"] Dec 05 10:41:17 crc kubenswrapper[4796]: E1205 10:41:17.144432 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59da3269-88ea-4095-b841-cf1b27cb4274" containerName="glance-db-sync" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.144444 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="59da3269-88ea-4095-b841-cf1b27cb4274" containerName="glance-db-sync" Dec 05 10:41:17 crc kubenswrapper[4796]: E1205 10:41:17.144456 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a535246b-8d73-4814-9eb0-9bc7e04e3414" containerName="mariadb-database-create" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.144462 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a535246b-8d73-4814-9eb0-9bc7e04e3414" containerName="mariadb-database-create" Dec 05 10:41:17 crc kubenswrapper[4796]: E1205 10:41:17.144477 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d100de0-f6c7-4a8c-8c7e-60ed550f3d43" containerName="ovn-config" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.144482 4796 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7d100de0-f6c7-4a8c-8c7e-60ed550f3d43" containerName="ovn-config" Dec 05 10:41:17 crc kubenswrapper[4796]: E1205 10:41:17.144493 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a87b8d-81a5-468f-9264-a5896daa5960" containerName="mariadb-database-create" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.144498 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a87b8d-81a5-468f-9264-a5896daa5960" containerName="mariadb-database-create" Dec 05 10:41:17 crc kubenswrapper[4796]: E1205 10:41:17.144512 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70929a4c-02e5-48e5-acf6-390bba65c808" containerName="mariadb-database-create" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.144518 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="70929a4c-02e5-48e5-acf6-390bba65c808" containerName="mariadb-database-create" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.144697 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a535246b-8d73-4814-9eb0-9bc7e04e3414" containerName="mariadb-database-create" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.144708 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="59da3269-88ea-4095-b841-cf1b27cb4274" containerName="glance-db-sync" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.144718 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="70929a4c-02e5-48e5-acf6-390bba65c808" containerName="mariadb-database-create" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.144726 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a87b8d-81a5-468f-9264-a5896daa5960" containerName="mariadb-database-create" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.144737 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d100de0-f6c7-4a8c-8c7e-60ed550f3d43" containerName="ovn-config" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.145807 4796 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.162261 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d879466b9-hwd7g"] Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.287897 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-dns-svc\") pod \"dnsmasq-dns-d879466b9-hwd7g\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.287940 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-config\") pod \"dnsmasq-dns-d879466b9-hwd7g\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.287974 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vhnn\" (UniqueName: \"kubernetes.io/projected/2f8de231-4c1d-4895-9626-5e639b82e02e-kube-api-access-8vhnn\") pod \"dnsmasq-dns-d879466b9-hwd7g\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.287998 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-ovsdbserver-nb\") pod \"dnsmasq-dns-d879466b9-hwd7g\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.288015 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-ovsdbserver-sb\") pod \"dnsmasq-dns-d879466b9-hwd7g\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.288050 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-dns-swift-storage-0\") pod \"dnsmasq-dns-d879466b9-hwd7g\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.388900 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-dns-svc\") pod \"dnsmasq-dns-d879466b9-hwd7g\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.388936 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-config\") pod \"dnsmasq-dns-d879466b9-hwd7g\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.388970 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vhnn\" (UniqueName: \"kubernetes.io/projected/2f8de231-4c1d-4895-9626-5e639b82e02e-kube-api-access-8vhnn\") pod \"dnsmasq-dns-d879466b9-hwd7g\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.388987 4796 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-ovsdbserver-nb\") pod \"dnsmasq-dns-d879466b9-hwd7g\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.389004 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-ovsdbserver-sb\") pod \"dnsmasq-dns-d879466b9-hwd7g\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.389035 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-dns-swift-storage-0\") pod \"dnsmasq-dns-d879466b9-hwd7g\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.389829 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-dns-swift-storage-0\") pod \"dnsmasq-dns-d879466b9-hwd7g\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.390414 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-dns-svc\") pod \"dnsmasq-dns-d879466b9-hwd7g\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.390621 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-ovsdbserver-nb\") pod \"dnsmasq-dns-d879466b9-hwd7g\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.390627 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-ovsdbserver-sb\") pod \"dnsmasq-dns-d879466b9-hwd7g\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.391075 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-config\") pod \"dnsmasq-dns-d879466b9-hwd7g\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.410232 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vhnn\" (UniqueName: \"kubernetes.io/projected/2f8de231-4c1d-4895-9626-5e639b82e02e-kube-api-access-8vhnn\") pod \"dnsmasq-dns-d879466b9-hwd7g\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.458521 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.828312 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d879466b9-hwd7g"] Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.917241 4796 generic.go:334] "Generic (PLEG): container finished" podID="bdcb30d0-0871-4171-8406-92f4864feac1" containerID="2b4644ac70c8362629a40b98893046e4e8ec6d28e082f9d683a251cc594a5213" exitCode=0 Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.917301 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hqvft" event={"ID":"bdcb30d0-0871-4171-8406-92f4864feac1","Type":"ContainerDied","Data":"2b4644ac70c8362629a40b98893046e4e8ec6d28e082f9d683a251cc594a5213"} Dec 05 10:41:17 crc kubenswrapper[4796]: I1205 10:41:17.918759 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d879466b9-hwd7g" event={"ID":"2f8de231-4c1d-4895-9626-5e639b82e02e","Type":"ContainerStarted","Data":"0322614546ecefafe1079408faa9cf1e3ef961e0f16d2804cc9d2332d93f07b0"} Dec 05 10:41:18 crc kubenswrapper[4796]: I1205 10:41:18.475050 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7ddql"] Dec 05 10:41:18 crc kubenswrapper[4796]: I1205 10:41:18.476764 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7ddql" Dec 05 10:41:18 crc kubenswrapper[4796]: I1205 10:41:18.481954 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7ddql"] Dec 05 10:41:18 crc kubenswrapper[4796]: I1205 10:41:18.604726 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbb5h\" (UniqueName: \"kubernetes.io/projected/4c607361-8750-4197-ab5d-071d2d63ba1f-kube-api-access-sbb5h\") pod \"community-operators-7ddql\" (UID: \"4c607361-8750-4197-ab5d-071d2d63ba1f\") " pod="openshift-marketplace/community-operators-7ddql" Dec 05 10:41:18 crc kubenswrapper[4796]: I1205 10:41:18.604805 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c607361-8750-4197-ab5d-071d2d63ba1f-utilities\") pod \"community-operators-7ddql\" (UID: \"4c607361-8750-4197-ab5d-071d2d63ba1f\") " pod="openshift-marketplace/community-operators-7ddql" Dec 05 10:41:18 crc kubenswrapper[4796]: I1205 10:41:18.604824 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c607361-8750-4197-ab5d-071d2d63ba1f-catalog-content\") pod \"community-operators-7ddql\" (UID: \"4c607361-8750-4197-ab5d-071d2d63ba1f\") " pod="openshift-marketplace/community-operators-7ddql" Dec 05 10:41:18 crc kubenswrapper[4796]: I1205 10:41:18.706076 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbb5h\" (UniqueName: \"kubernetes.io/projected/4c607361-8750-4197-ab5d-071d2d63ba1f-kube-api-access-sbb5h\") pod \"community-operators-7ddql\" (UID: \"4c607361-8750-4197-ab5d-071d2d63ba1f\") " pod="openshift-marketplace/community-operators-7ddql" Dec 05 10:41:18 crc kubenswrapper[4796]: I1205 10:41:18.706216 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c607361-8750-4197-ab5d-071d2d63ba1f-utilities\") pod \"community-operators-7ddql\" (UID: \"4c607361-8750-4197-ab5d-071d2d63ba1f\") " pod="openshift-marketplace/community-operators-7ddql" Dec 05 10:41:18 crc kubenswrapper[4796]: I1205 10:41:18.706239 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c607361-8750-4197-ab5d-071d2d63ba1f-catalog-content\") pod \"community-operators-7ddql\" (UID: \"4c607361-8750-4197-ab5d-071d2d63ba1f\") " pod="openshift-marketplace/community-operators-7ddql" Dec 05 10:41:18 crc kubenswrapper[4796]: I1205 10:41:18.706776 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c607361-8750-4197-ab5d-071d2d63ba1f-catalog-content\") pod \"community-operators-7ddql\" (UID: \"4c607361-8750-4197-ab5d-071d2d63ba1f\") " pod="openshift-marketplace/community-operators-7ddql" Dec 05 10:41:18 crc kubenswrapper[4796]: I1205 10:41:18.706837 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c607361-8750-4197-ab5d-071d2d63ba1f-utilities\") pod \"community-operators-7ddql\" (UID: \"4c607361-8750-4197-ab5d-071d2d63ba1f\") " pod="openshift-marketplace/community-operators-7ddql" Dec 05 10:41:18 crc kubenswrapper[4796]: I1205 10:41:18.724346 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbb5h\" (UniqueName: \"kubernetes.io/projected/4c607361-8750-4197-ab5d-071d2d63ba1f-kube-api-access-sbb5h\") pod \"community-operators-7ddql\" (UID: \"4c607361-8750-4197-ab5d-071d2d63ba1f\") " pod="openshift-marketplace/community-operators-7ddql" Dec 05 10:41:18 crc kubenswrapper[4796]: I1205 10:41:18.790028 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7ddql" Dec 05 10:41:18 crc kubenswrapper[4796]: I1205 10:41:18.931863 4796 generic.go:334] "Generic (PLEG): container finished" podID="2f8de231-4c1d-4895-9626-5e639b82e02e" containerID="3f186b966a6eb883079b17a1d3993fc3374a562e5a349cc1cdc876b8df78141c" exitCode=0 Dec 05 10:41:18 crc kubenswrapper[4796]: I1205 10:41:18.931957 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d879466b9-hwd7g" event={"ID":"2f8de231-4c1d-4895-9626-5e639b82e02e","Type":"ContainerDied","Data":"3f186b966a6eb883079b17a1d3993fc3374a562e5a349cc1cdc876b8df78141c"} Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.168538 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7ddql"] Dec 05 10:41:19 crc kubenswrapper[4796]: W1205 10:41:19.170539 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c607361_8750_4197_ab5d_071d2d63ba1f.slice/crio-16216d38d4f7724951b78b28e6b7f0be22dd275847a26fd9625d4ee48d80633d WatchSource:0}: Error finding container 16216d38d4f7724951b78b28e6b7f0be22dd275847a26fd9625d4ee48d80633d: Status 404 returned error can't find the container with id 16216d38d4f7724951b78b28e6b7f0be22dd275847a26fd9625d4ee48d80633d Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.273901 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-hqvft" Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.315175 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hw4c\" (UniqueName: \"kubernetes.io/projected/bdcb30d0-0871-4171-8406-92f4864feac1-kube-api-access-8hw4c\") pod \"bdcb30d0-0871-4171-8406-92f4864feac1\" (UID: \"bdcb30d0-0871-4171-8406-92f4864feac1\") " Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.315228 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdcb30d0-0871-4171-8406-92f4864feac1-config-data\") pod \"bdcb30d0-0871-4171-8406-92f4864feac1\" (UID: \"bdcb30d0-0871-4171-8406-92f4864feac1\") " Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.315267 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcb30d0-0871-4171-8406-92f4864feac1-combined-ca-bundle\") pod \"bdcb30d0-0871-4171-8406-92f4864feac1\" (UID: \"bdcb30d0-0871-4171-8406-92f4864feac1\") " Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.319474 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdcb30d0-0871-4171-8406-92f4864feac1-kube-api-access-8hw4c" (OuterVolumeSpecName: "kube-api-access-8hw4c") pod "bdcb30d0-0871-4171-8406-92f4864feac1" (UID: "bdcb30d0-0871-4171-8406-92f4864feac1"). InnerVolumeSpecName "kube-api-access-8hw4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.336819 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdcb30d0-0871-4171-8406-92f4864feac1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdcb30d0-0871-4171-8406-92f4864feac1" (UID: "bdcb30d0-0871-4171-8406-92f4864feac1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.347814 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdcb30d0-0871-4171-8406-92f4864feac1-config-data" (OuterVolumeSpecName: "config-data") pod "bdcb30d0-0871-4171-8406-92f4864feac1" (UID: "bdcb30d0-0871-4171-8406-92f4864feac1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.416461 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdcb30d0-0871-4171-8406-92f4864feac1-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.416484 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcb30d0-0871-4171-8406-92f4864feac1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.416495 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hw4c\" (UniqueName: \"kubernetes.io/projected/bdcb30d0-0871-4171-8406-92f4864feac1-kube-api-access-8hw4c\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.938542 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d879466b9-hwd7g" event={"ID":"2f8de231-4c1d-4895-9626-5e639b82e02e","Type":"ContainerStarted","Data":"065c165a6b894700c5d5cdbb16945507d3068b2e69d8298c69dda717e830bf13"} Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.939574 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.939833 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-hqvft" Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.939903 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-hqvft" event={"ID":"bdcb30d0-0871-4171-8406-92f4864feac1","Type":"ContainerDied","Data":"560292698b3c62eeaee175eaa8b8c3529cd33c1cf8922977391775361d6d8080"} Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.939939 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="560292698b3c62eeaee175eaa8b8c3529cd33c1cf8922977391775361d6d8080" Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.942139 4796 generic.go:334] "Generic (PLEG): container finished" podID="4c607361-8750-4197-ab5d-071d2d63ba1f" containerID="2c8af599a309e4cde1de3ef06f4e4f8241cf65c003a248f531baffe6738311a5" exitCode=0 Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.942161 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ddql" event={"ID":"4c607361-8750-4197-ab5d-071d2d63ba1f","Type":"ContainerDied","Data":"2c8af599a309e4cde1de3ef06f4e4f8241cf65c003a248f531baffe6738311a5"} Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.942186 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ddql" event={"ID":"4c607361-8750-4197-ab5d-071d2d63ba1f","Type":"ContainerStarted","Data":"16216d38d4f7724951b78b28e6b7f0be22dd275847a26fd9625d4ee48d80633d"} Dec 05 10:41:19 crc kubenswrapper[4796]: I1205 10:41:19.958440 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d879466b9-hwd7g" podStartSLOduration=2.9584292960000003 podStartE2EDuration="2.958429296s" podCreationTimestamp="2025-12-05 10:41:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:41:19.951199915 +0000 UTC m=+826.239305428" 
watchObservedRunningTime="2025-12-05 10:41:19.958429296 +0000 UTC m=+826.246534809" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.084918 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d879466b9-hwd7g"] Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.110175 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5689975857-jdl2f"] Dec 05 10:41:20 crc kubenswrapper[4796]: E1205 10:41:20.110496 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdcb30d0-0871-4171-8406-92f4864feac1" containerName="keystone-db-sync" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.110513 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdcb30d0-0871-4171-8406-92f4864feac1" containerName="keystone-db-sync" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.110633 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdcb30d0-0871-4171-8406-92f4864feac1" containerName="keystone-db-sync" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.111413 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.122460 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5689975857-jdl2f"] Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.138271 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dfsrl"] Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.144420 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.146831 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9thmp" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.146839 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.154402 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.168364 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.197454 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dfsrl"] Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.234432 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnbg4\" (UniqueName: \"kubernetes.io/projected/cf200531-c073-4d03-916a-cb3f54c5aa89-kube-api-access-gnbg4\") pod \"keystone-bootstrap-dfsrl\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.234537 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-dns-swift-storage-0\") pod \"dnsmasq-dns-5689975857-jdl2f\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.242955 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-dns-svc\") pod 
\"dnsmasq-dns-5689975857-jdl2f\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.243123 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crxbh\" (UniqueName: \"kubernetes.io/projected/176656fb-70e6-424e-a054-8a3279e3a8fe-kube-api-access-crxbh\") pod \"dnsmasq-dns-5689975857-jdl2f\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.243347 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-config-data\") pod \"keystone-bootstrap-dfsrl\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.243383 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-ovsdbserver-nb\") pod \"dnsmasq-dns-5689975857-jdl2f\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.243419 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-scripts\") pod \"keystone-bootstrap-dfsrl\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.243448 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-ovsdbserver-sb\") pod \"dnsmasq-dns-5689975857-jdl2f\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.243526 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-credential-keys\") pod \"keystone-bootstrap-dfsrl\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.243548 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-combined-ca-bundle\") pod \"keystone-bootstrap-dfsrl\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.243604 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-config\") pod \"dnsmasq-dns-5689975857-jdl2f\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.243710 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-fernet-keys\") pod \"keystone-bootstrap-dfsrl\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.308746 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8549c6756f-mnmb6"] Dec 05 10:41:20 crc kubenswrapper[4796]: 
I1205 10:41:20.310072 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.319096 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.319251 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-7ztp2" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.319373 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.319508 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.330053 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8549c6756f-mnmb6"] Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.345549 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-ovsdbserver-nb\") pod \"dnsmasq-dns-5689975857-jdl2f\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.345588 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-scripts\") pod \"keystone-bootstrap-dfsrl\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.345616 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5689975857-jdl2f\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.345662 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-credential-keys\") pod \"keystone-bootstrap-dfsrl\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.345694 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-combined-ca-bundle\") pod \"keystone-bootstrap-dfsrl\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.345716 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-config\") pod \"dnsmasq-dns-5689975857-jdl2f\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.345758 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-fernet-keys\") pod \"keystone-bootstrap-dfsrl\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.345795 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnbg4\" (UniqueName: \"kubernetes.io/projected/cf200531-c073-4d03-916a-cb3f54c5aa89-kube-api-access-gnbg4\") pod \"keystone-bootstrap-dfsrl\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " 
pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.345836 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-dns-swift-storage-0\") pod \"dnsmasq-dns-5689975857-jdl2f\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.345864 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-dns-svc\") pod \"dnsmasq-dns-5689975857-jdl2f\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.345914 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crxbh\" (UniqueName: \"kubernetes.io/projected/176656fb-70e6-424e-a054-8a3279e3a8fe-kube-api-access-crxbh\") pod \"dnsmasq-dns-5689975857-jdl2f\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.347232 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-config-data\") pod \"keystone-bootstrap-dfsrl\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.357620 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-dns-swift-storage-0\") pod \"dnsmasq-dns-5689975857-jdl2f\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 
crc kubenswrapper[4796]: I1205 10:41:20.360622 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-dns-svc\") pod \"dnsmasq-dns-5689975857-jdl2f\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.361345 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-ovsdbserver-sb\") pod \"dnsmasq-dns-5689975857-jdl2f\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.361636 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-config-data\") pod \"keystone-bootstrap-dfsrl\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.361932 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-ovsdbserver-nb\") pod \"dnsmasq-dns-5689975857-jdl2f\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.363610 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-config\") pod \"dnsmasq-dns-5689975857-jdl2f\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.365208 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-fernet-keys\") pod \"keystone-bootstrap-dfsrl\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.375788 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-scripts\") pod \"keystone-bootstrap-dfsrl\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.384631 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-combined-ca-bundle\") pod \"keystone-bootstrap-dfsrl\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.386916 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crxbh\" (UniqueName: \"kubernetes.io/projected/176656fb-70e6-424e-a054-8a3279e3a8fe-kube-api-access-crxbh\") pod \"dnsmasq-dns-5689975857-jdl2f\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.402578 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-credential-keys\") pod \"keystone-bootstrap-dfsrl\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.408116 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnbg4\" (UniqueName: \"kubernetes.io/projected/cf200531-c073-4d03-916a-cb3f54c5aa89-kube-api-access-gnbg4\") pod 
\"keystone-bootstrap-dfsrl\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.423520 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.451406 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/611e3579-a9f1-409e-9d3a-071a436916fd-scripts\") pod \"horizon-8549c6756f-mnmb6\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.451450 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/611e3579-a9f1-409e-9d3a-071a436916fd-logs\") pod \"horizon-8549c6756f-mnmb6\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.451505 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/611e3579-a9f1-409e-9d3a-071a436916fd-horizon-secret-key\") pod \"horizon-8549c6756f-mnmb6\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.451553 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pggfj\" (UniqueName: \"kubernetes.io/projected/611e3579-a9f1-409e-9d3a-071a436916fd-kube-api-access-pggfj\") pod \"horizon-8549c6756f-mnmb6\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.451589 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/611e3579-a9f1-409e-9d3a-071a436916fd-config-data\") pod \"horizon-8549c6756f-mnmb6\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.483812 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5c85cf4b9c-c294z"] Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.493983 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.494268 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.497024 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.501523 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c85cf4b9c-c294z"] Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.501839 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.502100 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.503710 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.526124 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.527487 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.535706 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.536483 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.537137 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.546772 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-rlc4g" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.549199 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.554338 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/611e3579-a9f1-409e-9d3a-071a436916fd-config-data\") pod \"horizon-8549c6756f-mnmb6\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.554405 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1c11605-37fa-4897-9583-2244b3de20c1-logs\") pod \"horizon-5c85cf4b9c-c294z\" (UID: \"c1c11605-37fa-4897-9583-2244b3de20c1\") " pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.554427 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.554447 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/611e3579-a9f1-409e-9d3a-071a436916fd-scripts\") pod \"horizon-8549c6756f-mnmb6\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.554473 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/611e3579-a9f1-409e-9d3a-071a436916fd-logs\") pod \"horizon-8549c6756f-mnmb6\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.554504 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1c11605-37fa-4897-9583-2244b3de20c1-config-data\") pod \"horizon-5c85cf4b9c-c294z\" (UID: \"c1c11605-37fa-4897-9583-2244b3de20c1\") " pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.554518 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-config-data\") pod \"ceilometer-0\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.554538 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 
10:41:20.554557 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/611e3579-a9f1-409e-9d3a-071a436916fd-horizon-secret-key\") pod \"horizon-8549c6756f-mnmb6\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.554580 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01d9c19c-9758-4d61-8a0d-53868923bfea-run-httpd\") pod \"ceilometer-0\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.554601 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-scripts\") pod \"ceilometer-0\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.554621 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmd85\" (UniqueName: \"kubernetes.io/projected/01d9c19c-9758-4d61-8a0d-53868923bfea-kube-api-access-cmd85\") pod \"ceilometer-0\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.554638 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pggfj\" (UniqueName: \"kubernetes.io/projected/611e3579-a9f1-409e-9d3a-071a436916fd-kube-api-access-pggfj\") pod \"horizon-8549c6756f-mnmb6\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.554660 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1c11605-37fa-4897-9583-2244b3de20c1-horizon-secret-key\") pod \"horizon-5c85cf4b9c-c294z\" (UID: \"c1c11605-37fa-4897-9583-2244b3de20c1\") " pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.554715 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7lxj\" (UniqueName: \"kubernetes.io/projected/c1c11605-37fa-4897-9583-2244b3de20c1-kube-api-access-c7lxj\") pod \"horizon-5c85cf4b9c-c294z\" (UID: \"c1c11605-37fa-4897-9583-2244b3de20c1\") " pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.554735 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1c11605-37fa-4897-9583-2244b3de20c1-scripts\") pod \"horizon-5c85cf4b9c-c294z\" (UID: \"c1c11605-37fa-4897-9583-2244b3de20c1\") " pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.554750 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01d9c19c-9758-4d61-8a0d-53868923bfea-log-httpd\") pod \"ceilometer-0\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.561159 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/611e3579-a9f1-409e-9d3a-071a436916fd-config-data\") pod \"horizon-8549c6756f-mnmb6\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.561659 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/611e3579-a9f1-409e-9d3a-071a436916fd-scripts\") pod \"horizon-8549c6756f-mnmb6\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.561871 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/611e3579-a9f1-409e-9d3a-071a436916fd-logs\") pod \"horizon-8549c6756f-mnmb6\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.578784 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/611e3579-a9f1-409e-9d3a-071a436916fd-horizon-secret-key\") pod \"horizon-8549c6756f-mnmb6\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.592606 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.604798 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-2mbjb"] Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.605366 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pggfj\" (UniqueName: \"kubernetes.io/projected/611e3579-a9f1-409e-9d3a-071a436916fd-kube-api-access-pggfj\") pod \"horizon-8549c6756f-mnmb6\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.605922 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2mbjb" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.612598 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.614592 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wn2rf" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.618584 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.623944 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5689975857-jdl2f"] Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.633771 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2mbjb"] Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.652556 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74fd8b655f-6pbcd"] Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.653828 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74fd8b655f-6pbcd"] Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.653903 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.654094 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661295 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661339 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-combined-ca-bundle\") pod \"placement-db-sync-2mbjb\" (UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " pod="openstack/placement-db-sync-2mbjb" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661360 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-scripts\") pod \"placement-db-sync-2mbjb\" (UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " pod="openstack/placement-db-sync-2mbjb" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661382 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1c11605-37fa-4897-9583-2244b3de20c1-logs\") pod \"horizon-5c85cf4b9c-c294z\" (UID: \"c1c11605-37fa-4897-9583-2244b3de20c1\") " pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661396 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: 
I1205 10:41:20.661417 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661453 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661470 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-config-data\") pod \"placement-db-sync-2mbjb\" (UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " pod="openstack/placement-db-sync-2mbjb" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661491 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1c11605-37fa-4897-9583-2244b3de20c1-config-data\") pod \"horizon-5c85cf4b9c-c294z\" (UID: \"c1c11605-37fa-4897-9583-2244b3de20c1\") " pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661506 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-config-data\") pod \"ceilometer-0\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661526 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2c80cf99-cad2-41a5-bda0-eb1c8334da35-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661542 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661570 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-logs\") pod \"placement-db-sync-2mbjb\" (UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " pod="openstack/placement-db-sync-2mbjb" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661585 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01d9c19c-9758-4d61-8a0d-53868923bfea-run-httpd\") pod \"ceilometer-0\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661604 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-scripts\") pod \"ceilometer-0\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661620 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsvq5\" (UniqueName: 
\"kubernetes.io/projected/2c80cf99-cad2-41a5-bda0-eb1c8334da35-kube-api-access-zsvq5\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661639 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-config-data\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661656 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmd85\" (UniqueName: \"kubernetes.io/projected/01d9c19c-9758-4d61-8a0d-53868923bfea-kube-api-access-cmd85\") pod \"ceilometer-0\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661673 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-scripts\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661706 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1c11605-37fa-4897-9583-2244b3de20c1-horizon-secret-key\") pod \"horizon-5c85cf4b9c-c294z\" (UID: \"c1c11605-37fa-4897-9583-2244b3de20c1\") " pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661725 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7lxj\" (UniqueName: 
\"kubernetes.io/projected/c1c11605-37fa-4897-9583-2244b3de20c1-kube-api-access-c7lxj\") pod \"horizon-5c85cf4b9c-c294z\" (UID: \"c1c11605-37fa-4897-9583-2244b3de20c1\") " pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661739 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c80cf99-cad2-41a5-bda0-eb1c8334da35-logs\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661753 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qssjj\" (UniqueName: \"kubernetes.io/projected/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-kube-api-access-qssjj\") pod \"placement-db-sync-2mbjb\" (UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " pod="openstack/placement-db-sync-2mbjb" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661771 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1c11605-37fa-4897-9583-2244b3de20c1-scripts\") pod \"horizon-5c85cf4b9c-c294z\" (UID: \"c1c11605-37fa-4897-9583-2244b3de20c1\") " pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.661789 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01d9c19c-9758-4d61-8a0d-53868923bfea-log-httpd\") pod \"ceilometer-0\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.662072 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01d9c19c-9758-4d61-8a0d-53868923bfea-log-httpd\") pod \"ceilometer-0\" (UID: 
\"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.662495 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1c11605-37fa-4897-9583-2244b3de20c1-logs\") pod \"horizon-5c85cf4b9c-c294z\" (UID: \"c1c11605-37fa-4897-9583-2244b3de20c1\") " pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.662533 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01d9c19c-9758-4d61-8a0d-53868923bfea-run-httpd\") pod \"ceilometer-0\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.664510 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1c11605-37fa-4897-9583-2244b3de20c1-config-data\") pod \"horizon-5c85cf4b9c-c294z\" (UID: \"c1c11605-37fa-4897-9583-2244b3de20c1\") " pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.665451 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.665707 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1c11605-37fa-4897-9583-2244b3de20c1-scripts\") pod \"horizon-5c85cf4b9c-c294z\" (UID: \"c1c11605-37fa-4897-9583-2244b3de20c1\") " pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.676409 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1c11605-37fa-4897-9583-2244b3de20c1-horizon-secret-key\") pod \"horizon-5c85cf4b9c-c294z\" (UID: \"c1c11605-37fa-4897-9583-2244b3de20c1\") " pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.679358 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-config-data\") pod \"ceilometer-0\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.684371 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.685554 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.687160 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7lxj\" (UniqueName: \"kubernetes.io/projected/c1c11605-37fa-4897-9583-2244b3de20c1-kube-api-access-c7lxj\") pod \"horizon-5c85cf4b9c-c294z\" (UID: \"c1c11605-37fa-4897-9583-2244b3de20c1\") " pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.688195 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmd85\" (UniqueName: \"kubernetes.io/projected/01d9c19c-9758-4d61-8a0d-53868923bfea-kube-api-access-cmd85\") pod \"ceilometer-0\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.692030 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.701046 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.701199 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.712295 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.735859 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-scripts\") pod \"ceilometer-0\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.767877 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-logs\") pod \"placement-db-sync-2mbjb\" (UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " pod="openstack/placement-db-sync-2mbjb" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.767914 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-dns-swift-storage-0\") pod \"dnsmasq-dns-74fd8b655f-6pbcd\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.767954 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsvq5\" (UniqueName: \"kubernetes.io/projected/2c80cf99-cad2-41a5-bda0-eb1c8334da35-kube-api-access-zsvq5\") pod 
\"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.767977 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-config-data\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.767993 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-config-data\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768013 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-scripts\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768030 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8nv5\" (UniqueName: \"kubernetes.io/projected/370c8f58-d257-4e3f-a54a-d34231b6dfd5-kube-api-access-q8nv5\") pod \"dnsmasq-dns-74fd8b655f-6pbcd\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768062 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c80cf99-cad2-41a5-bda0-eb1c8334da35-logs\") pod \"glance-default-external-api-0\" (UID: 
\"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768079 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qssjj\" (UniqueName: \"kubernetes.io/projected/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-kube-api-access-qssjj\") pod \"placement-db-sync-2mbjb\" (UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " pod="openstack/placement-db-sync-2mbjb" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768101 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768120 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/684ee16c-6c42-46e9-b629-87cdff8b3076-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768139 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-dns-svc\") pod \"dnsmasq-dns-74fd8b655f-6pbcd\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768158 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: 
\"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768182 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/684ee16c-6c42-46e9-b629-87cdff8b3076-logs\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768219 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-combined-ca-bundle\") pod \"placement-db-sync-2mbjb\" (UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " pod="openstack/placement-db-sync-2mbjb" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768242 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-scripts\") pod \"placement-db-sync-2mbjb\" (UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " pod="openstack/placement-db-sync-2mbjb" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768264 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768290 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-scripts\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " 
pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768310 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-ovsdbserver-nb\") pod \"dnsmasq-dns-74fd8b655f-6pbcd\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768333 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768365 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-config-data\") pod \"placement-db-sync-2mbjb\" (UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " pod="openstack/placement-db-sync-2mbjb" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768385 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768399 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " 
pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768420 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-config\") pod \"dnsmasq-dns-74fd8b655f-6pbcd\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768436 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2c80cf99-cad2-41a5-bda0-eb1c8334da35-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768450 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcn6f\" (UniqueName: \"kubernetes.io/projected/684ee16c-6c42-46e9-b629-87cdff8b3076-kube-api-access-gcn6f\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768479 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-ovsdbserver-sb\") pod \"dnsmasq-dns-74fd8b655f-6pbcd\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.768703 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-logs\") pod \"placement-db-sync-2mbjb\" (UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " 
pod="openstack/placement-db-sync-2mbjb" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.769949 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.771472 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2c80cf99-cad2-41a5-bda0-eb1c8334da35-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.771979 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c80cf99-cad2-41a5-bda0-eb1c8334da35-logs\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.776235 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-config-data\") pod \"placement-db-sync-2mbjb\" (UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " pod="openstack/placement-db-sync-2mbjb" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.776862 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-combined-ca-bundle\") pod \"placement-db-sync-2mbjb\" (UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " pod="openstack/placement-db-sync-2mbjb" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.780283 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.784113 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-scripts\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.786262 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsvq5\" (UniqueName: \"kubernetes.io/projected/2c80cf99-cad2-41a5-bda0-eb1c8334da35-kube-api-access-zsvq5\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.789737 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-config-data\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.804154 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-scripts\") pod \"placement-db-sync-2mbjb\" (UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " pod="openstack/placement-db-sync-2mbjb" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.804495 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qssjj\" (UniqueName: 
\"kubernetes.io/projected/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-kube-api-access-qssjj\") pod \"placement-db-sync-2mbjb\" (UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " pod="openstack/placement-db-sync-2mbjb" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.807276 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.818865 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.869860 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/684ee16c-6c42-46e9-b629-87cdff8b3076-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.869912 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-dns-svc\") pod \"dnsmasq-dns-74fd8b655f-6pbcd\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.869954 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/684ee16c-6c42-46e9-b629-87cdff8b3076-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.870065 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-scripts\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.870125 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-ovsdbserver-nb\") pod \"dnsmasq-dns-74fd8b655f-6pbcd\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.870194 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.870212 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.870239 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-config\") pod \"dnsmasq-dns-74fd8b655f-6pbcd\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 
10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.870277 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcn6f\" (UniqueName: \"kubernetes.io/projected/684ee16c-6c42-46e9-b629-87cdff8b3076-kube-api-access-gcn6f\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.870307 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/684ee16c-6c42-46e9-b629-87cdff8b3076-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.870311 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-ovsdbserver-sb\") pod \"dnsmasq-dns-74fd8b655f-6pbcd\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.870388 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-dns-swift-storage-0\") pod \"dnsmasq-dns-74fd8b655f-6pbcd\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.870455 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-config-data\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 
10:41:20.870497 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8nv5\" (UniqueName: \"kubernetes.io/projected/370c8f58-d257-4e3f-a54a-d34231b6dfd5-kube-api-access-q8nv5\") pod \"dnsmasq-dns-74fd8b655f-6pbcd\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.870560 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.871752 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.871788 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-dns-swift-storage-0\") pod \"dnsmasq-dns-74fd8b655f-6pbcd\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.871874 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/684ee16c-6c42-46e9-b629-87cdff8b3076-logs\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.872128 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-ovsdbserver-nb\") pod \"dnsmasq-dns-74fd8b655f-6pbcd\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.872568 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-dns-svc\") pod \"dnsmasq-dns-74fd8b655f-6pbcd\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.873109 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-config\") pod \"dnsmasq-dns-74fd8b655f-6pbcd\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.873304 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-ovsdbserver-sb\") pod \"dnsmasq-dns-74fd8b655f-6pbcd\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.875277 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.881945 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-scripts\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.885637 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-config-data\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.886465 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8nv5\" (UniqueName: \"kubernetes.io/projected/370c8f58-d257-4e3f-a54a-d34231b6dfd5-kube-api-access-q8nv5\") pod \"dnsmasq-dns-74fd8b655f-6pbcd\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.894834 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.897153 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcn6f\" (UniqueName: \"kubernetes.io/projected/684ee16c-6c42-46e9-b629-87cdff8b3076-kube-api-access-gcn6f\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.915609 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.915893 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.931936 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.936430 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.954185 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2mbjb" Dec 05 10:41:20 crc kubenswrapper[4796]: I1205 10:41:20.989523 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.013244 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ddql" event={"ID":"4c607361-8750-4197-ab5d-071d2d63ba1f","Type":"ContainerStarted","Data":"be778e2ee71b23f02bfc4be6165a7a41c838d9732dd68b01738ea3d2842e410a"} Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.031124 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.128433 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5689975857-jdl2f"] Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.143049 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dfsrl"] Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.292038 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8549c6756f-mnmb6"] Dec 05 10:41:21 crc kubenswrapper[4796]: W1205 10:41:21.310591 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod611e3579_a9f1_409e_9d3a_071a436916fd.slice/crio-fc4105c76a74f969f58bfe8496f1e923564f7bcfd6795802cdb093702f7221d0 WatchSource:0}: Error finding container fc4105c76a74f969f58bfe8496f1e923564f7bcfd6795802cdb093702f7221d0: Status 404 returned error can't find the container with id fc4105c76a74f969f58bfe8496f1e923564f7bcfd6795802cdb093702f7221d0 Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.314888 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c85cf4b9c-c294z"] Dec 05 10:41:21 crc kubenswrapper[4796]: W1205 10:41:21.315341 4796 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1c11605_37fa_4897_9583_2244b3de20c1.slice/crio-fbb457bb1e2d4b7003927255fd2e253b00c6352068dfdea91c512b89bc11069b WatchSource:0}: Error finding container fbb457bb1e2d4b7003927255fd2e253b00c6352068dfdea91c512b89bc11069b: Status 404 returned error can't find the container with id fbb457bb1e2d4b7003927255fd2e253b00c6352068dfdea91c512b89bc11069b Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.541895 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6b1d-account-create-h2cct"] Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.542880 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6b1d-account-create-h2cct" Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.545509 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.546182 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2mbjb"] Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.561898 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6b1d-account-create-h2cct"] Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.569447 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.586751 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8dkn\" (UniqueName: \"kubernetes.io/projected/ddc03506-8f42-469e-8315-b0bfe3b4c2be-kube-api-access-j8dkn\") pod \"cinder-6b1d-account-create-h2cct\" (UID: \"ddc03506-8f42-469e-8315-b0bfe3b4c2be\") " pod="openstack/cinder-6b1d-account-create-h2cct" Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.639064 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-e210-account-create-qkfmm"] 
Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.640024 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-e210-account-create-qkfmm" Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.641367 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.662608 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e210-account-create-qkfmm"] Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.680607 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74fd8b655f-6pbcd"] Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.687960 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8dkn\" (UniqueName: \"kubernetes.io/projected/ddc03506-8f42-469e-8315-b0bfe3b4c2be-kube-api-access-j8dkn\") pod \"cinder-6b1d-account-create-h2cct\" (UID: \"ddc03506-8f42-469e-8315-b0bfe3b4c2be\") " pod="openstack/cinder-6b1d-account-create-h2cct" Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.688080 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zndmb\" (UniqueName: \"kubernetes.io/projected/ad5ee03d-3cd9-4633-bdf2-00942ae22258-kube-api-access-zndmb\") pod \"barbican-e210-account-create-qkfmm\" (UID: \"ad5ee03d-3cd9-4633-bdf2-00942ae22258\") " pod="openstack/barbican-e210-account-create-qkfmm" Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.701989 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8dkn\" (UniqueName: \"kubernetes.io/projected/ddc03506-8f42-469e-8315-b0bfe3b4c2be-kube-api-access-j8dkn\") pod \"cinder-6b1d-account-create-h2cct\" (UID: \"ddc03506-8f42-469e-8315-b0bfe3b4c2be\") " pod="openstack/cinder-6b1d-account-create-h2cct" Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.725761 
4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.789069 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zndmb\" (UniqueName: \"kubernetes.io/projected/ad5ee03d-3cd9-4633-bdf2-00942ae22258-kube-api-access-zndmb\") pod \"barbican-e210-account-create-qkfmm\" (UID: \"ad5ee03d-3cd9-4633-bdf2-00942ae22258\") " pod="openstack/barbican-e210-account-create-qkfmm" Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.803404 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zndmb\" (UniqueName: \"kubernetes.io/projected/ad5ee03d-3cd9-4633-bdf2-00942ae22258-kube-api-access-zndmb\") pod \"barbican-e210-account-create-qkfmm\" (UID: \"ad5ee03d-3cd9-4633-bdf2-00942ae22258\") " pod="openstack/barbican-e210-account-create-qkfmm" Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.816465 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 10:41:21 crc kubenswrapper[4796]: W1205 10:41:21.819122 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod684ee16c_6c42_46e9_b629_87cdff8b3076.slice/crio-7ca51cc3610c38dd2d0d3587378ccbd92143bcca94fe195698d9e822604e1068 WatchSource:0}: Error finding container 7ca51cc3610c38dd2d0d3587378ccbd92143bcca94fe195698d9e822604e1068: Status 404 returned error can't find the container with id 7ca51cc3610c38dd2d0d3587378ccbd92143bcca94fe195698d9e822604e1068 Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.857990 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6b1d-account-create-h2cct" Dec 05 10:41:21 crc kubenswrapper[4796]: I1205 10:41:21.962834 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-e210-account-create-qkfmm" Dec 05 10:41:22 crc kubenswrapper[4796]: I1205 10:41:22.066366 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01d9c19c-9758-4d61-8a0d-53868923bfea","Type":"ContainerStarted","Data":"c0239fc9ce0f7551603c601eca797ef9632f686290f0a2bd991399c7faa1d725"} Dec 05 10:41:22 crc kubenswrapper[4796]: I1205 10:41:22.076007 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2c80cf99-cad2-41a5-bda0-eb1c8334da35","Type":"ContainerStarted","Data":"186e377270a0b5ddaf9e7496801b641ac1e15e876d417b55251f4a2d24523ac9"} Dec 05 10:41:22 crc kubenswrapper[4796]: I1205 10:41:22.076921 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"684ee16c-6c42-46e9-b629-87cdff8b3076","Type":"ContainerStarted","Data":"7ca51cc3610c38dd2d0d3587378ccbd92143bcca94fe195698d9e822604e1068"} Dec 05 10:41:22 crc kubenswrapper[4796]: I1205 10:41:22.077744 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dfsrl" event={"ID":"cf200531-c073-4d03-916a-cb3f54c5aa89","Type":"ContainerStarted","Data":"455a0ba9c3714e2b07b0804e6a32ce495e894ab27022c9824103b9e53b753e13"} Dec 05 10:41:22 crc kubenswrapper[4796]: I1205 10:41:22.078907 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c85cf4b9c-c294z" event={"ID":"c1c11605-37fa-4897-9583-2244b3de20c1","Type":"ContainerStarted","Data":"fbb457bb1e2d4b7003927255fd2e253b00c6352068dfdea91c512b89bc11069b"} Dec 05 10:41:22 crc kubenswrapper[4796]: I1205 10:41:22.083425 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8549c6756f-mnmb6" event={"ID":"611e3579-a9f1-409e-9d3a-071a436916fd","Type":"ContainerStarted","Data":"fc4105c76a74f969f58bfe8496f1e923564f7bcfd6795802cdb093702f7221d0"} Dec 05 10:41:22 crc kubenswrapper[4796]: I1205 
10:41:22.087288 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2mbjb" event={"ID":"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6","Type":"ContainerStarted","Data":"60ae0e444b3a3cc360833bb5378aa6878409597a356cd48bbdbef86f966f1372"} Dec 05 10:41:22 crc kubenswrapper[4796]: I1205 10:41:22.088710 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" event={"ID":"370c8f58-d257-4e3f-a54a-d34231b6dfd5","Type":"ContainerStarted","Data":"7638046189bbf1c1afbdb1b880e871e2a9fc6e030ecf89d5812b6703920d655b"} Dec 05 10:41:22 crc kubenswrapper[4796]: I1205 10:41:22.089902 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5689975857-jdl2f" event={"ID":"176656fb-70e6-424e-a054-8a3279e3a8fe","Type":"ContainerStarted","Data":"1bcaaa549f83a010c5cada0768ed9da0b95163bf22ba0c27455ae3a1299fad13"} Dec 05 10:41:22 crc kubenswrapper[4796]: I1205 10:41:22.104553 4796 generic.go:334] "Generic (PLEG): container finished" podID="4c607361-8750-4197-ab5d-071d2d63ba1f" containerID="be778e2ee71b23f02bfc4be6165a7a41c838d9732dd68b01738ea3d2842e410a" exitCode=0 Dec 05 10:41:22 crc kubenswrapper[4796]: I1205 10:41:22.104735 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d879466b9-hwd7g" podUID="2f8de231-4c1d-4895-9626-5e639b82e02e" containerName="dnsmasq-dns" containerID="cri-o://065c165a6b894700c5d5cdbb16945507d3068b2e69d8298c69dda717e830bf13" gracePeriod=10 Dec 05 10:41:22 crc kubenswrapper[4796]: I1205 10:41:22.105768 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ddql" event={"ID":"4c607361-8750-4197-ab5d-071d2d63ba1f","Type":"ContainerDied","Data":"be778e2ee71b23f02bfc4be6165a7a41c838d9732dd68b01738ea3d2842e410a"} Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:22.270346 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6b1d-account-create-h2cct"] Dec 
05 10:41:23 crc kubenswrapper[4796]: W1205 10:41:22.301442 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddc03506_8f42_469e_8315_b0bfe3b4c2be.slice/crio-e09215b0bcbe8579bf3c4185673f10703a4fbf409647953153f058bb449f0c6d WatchSource:0}: Error finding container e09215b0bcbe8579bf3c4185673f10703a4fbf409647953153f058bb449f0c6d: Status 404 returned error can't find the container with id e09215b0bcbe8579bf3c4185673f10703a4fbf409647953153f058bb449f0c6d Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:22.365802 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-e210-account-create-qkfmm"] Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.141010 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ddql" event={"ID":"4c607361-8750-4197-ab5d-071d2d63ba1f","Type":"ContainerStarted","Data":"216bcca1dfa13bc88515e966c688b3f57739a7d50783f5f2fb1d1553397e8112"} Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.143253 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dfsrl" event={"ID":"cf200531-c073-4d03-916a-cb3f54c5aa89","Type":"ContainerStarted","Data":"e1b8948c49705b6c44a7145ef1a9315332d8d905e50ecd667137804c46528764"} Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.146336 4796 generic.go:334] "Generic (PLEG): container finished" podID="2f8de231-4c1d-4895-9626-5e639b82e02e" containerID="065c165a6b894700c5d5cdbb16945507d3068b2e69d8298c69dda717e830bf13" exitCode=0 Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.146386 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d879466b9-hwd7g" event={"ID":"2f8de231-4c1d-4895-9626-5e639b82e02e","Type":"ContainerDied","Data":"065c165a6b894700c5d5cdbb16945507d3068b2e69d8298c69dda717e830bf13"} Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.149888 4796 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"684ee16c-6c42-46e9-b629-87cdff8b3076","Type":"ContainerStarted","Data":"a100e0dbeb0419045eeedd96547edb2b013ab8cab5835652b907232f44cfd5c9"} Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.152430 4796 generic.go:334] "Generic (PLEG): container finished" podID="ad5ee03d-3cd9-4633-bdf2-00942ae22258" containerID="6713b35bf82ae898e3121e1871ea97afbef3b1ec6032f0dafda895af54e87dee" exitCode=0 Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.152471 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e210-account-create-qkfmm" event={"ID":"ad5ee03d-3cd9-4633-bdf2-00942ae22258","Type":"ContainerDied","Data":"6713b35bf82ae898e3121e1871ea97afbef3b1ec6032f0dafda895af54e87dee"} Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.152486 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e210-account-create-qkfmm" event={"ID":"ad5ee03d-3cd9-4633-bdf2-00942ae22258","Type":"ContainerStarted","Data":"a1f43ca864d2b7a7d37f87be0d888204a170d00af0020645c555cda4eb18845a"} Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.153901 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2c80cf99-cad2-41a5-bda0-eb1c8334da35","Type":"ContainerStarted","Data":"9ec45a366dc039a2ea53db3ec81cd25b1c6d4872d3a7f535a48adf852311f188"} Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.160763 4796 generic.go:334] "Generic (PLEG): container finished" podID="176656fb-70e6-424e-a054-8a3279e3a8fe" containerID="0aec0f32e163220759098fa21919d48c178ae69e82c007e0759e7e61f8d439e1" exitCode=0 Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.160802 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5689975857-jdl2f" 
event={"ID":"176656fb-70e6-424e-a054-8a3279e3a8fe","Type":"ContainerDied","Data":"0aec0f32e163220759098fa21919d48c178ae69e82c007e0759e7e61f8d439e1"} Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.161921 4796 generic.go:334] "Generic (PLEG): container finished" podID="ddc03506-8f42-469e-8315-b0bfe3b4c2be" containerID="62dc96a793449894118ca39b5b880f3b8596f9fded97738e8edf130d36a2e390" exitCode=0 Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.161956 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6b1d-account-create-h2cct" event={"ID":"ddc03506-8f42-469e-8315-b0bfe3b4c2be","Type":"ContainerDied","Data":"62dc96a793449894118ca39b5b880f3b8596f9fded97738e8edf130d36a2e390"} Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.161970 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6b1d-account-create-h2cct" event={"ID":"ddc03506-8f42-469e-8315-b0bfe3b4c2be","Type":"ContainerStarted","Data":"e09215b0bcbe8579bf3c4185673f10703a4fbf409647953153f058bb449f0c6d"} Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.163200 4796 generic.go:334] "Generic (PLEG): container finished" podID="370c8f58-d257-4e3f-a54a-d34231b6dfd5" containerID="46af83226b845395767247cdc2028373b44feee84b15c1422749bdd4bfd826b7" exitCode=0 Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.163226 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" event={"ID":"370c8f58-d257-4e3f-a54a-d34231b6dfd5","Type":"ContainerDied","Data":"46af83226b845395767247cdc2028373b44feee84b15c1422749bdd4bfd826b7"} Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.175825 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7ddql" podStartSLOduration=2.526301602 podStartE2EDuration="5.175812315s" podCreationTimestamp="2025-12-05 10:41:18 +0000 UTC" firstStartedPulling="2025-12-05 10:41:19.943334146 +0000 UTC m=+826.231439660" 
lastFinishedPulling="2025-12-05 10:41:22.59284486 +0000 UTC m=+828.880950373" observedRunningTime="2025-12-05 10:41:23.159837291 +0000 UTC m=+829.447942804" watchObservedRunningTime="2025-12-05 10:41:23.175812315 +0000 UTC m=+829.463917818" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.230801 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dfsrl" podStartSLOduration=3.230787894 podStartE2EDuration="3.230787894s" podCreationTimestamp="2025-12-05 10:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:41:23.225442548 +0000 UTC m=+829.513548060" watchObservedRunningTime="2025-12-05 10:41:23.230787894 +0000 UTC m=+829.518893407" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.234496 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.234484407 podStartE2EDuration="3.234484407s" podCreationTimestamp="2025-12-05 10:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:41:23.20679791 +0000 UTC m=+829.494903422" watchObservedRunningTime="2025-12-05 10:41:23.234484407 +0000 UTC m=+829.522589921" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.316825 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.329280 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-dns-swift-storage-0\") pod \"2f8de231-4c1d-4895-9626-5e639b82e02e\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.329381 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-ovsdbserver-nb\") pod \"2f8de231-4c1d-4895-9626-5e639b82e02e\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.329415 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-ovsdbserver-sb\") pod \"2f8de231-4c1d-4895-9626-5e639b82e02e\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.329460 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-dns-svc\") pod \"2f8de231-4c1d-4895-9626-5e639b82e02e\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.329482 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-config\") pod \"2f8de231-4c1d-4895-9626-5e639b82e02e\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.329519 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vhnn\" 
(UniqueName: \"kubernetes.io/projected/2f8de231-4c1d-4895-9626-5e639b82e02e-kube-api-access-8vhnn\") pod \"2f8de231-4c1d-4895-9626-5e639b82e02e\" (UID: \"2f8de231-4c1d-4895-9626-5e639b82e02e\") " Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.333901 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f8de231-4c1d-4895-9626-5e639b82e02e-kube-api-access-8vhnn" (OuterVolumeSpecName: "kube-api-access-8vhnn") pod "2f8de231-4c1d-4895-9626-5e639b82e02e" (UID: "2f8de231-4c1d-4895-9626-5e639b82e02e"). InnerVolumeSpecName "kube-api-access-8vhnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.369530 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2f8de231-4c1d-4895-9626-5e639b82e02e" (UID: "2f8de231-4c1d-4895-9626-5e639b82e02e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.448771 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.448792 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vhnn\" (UniqueName: \"kubernetes.io/projected/2f8de231-4c1d-4895-9626-5e639b82e02e-kube-api-access-8vhnn\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.449489 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2f8de231-4c1d-4895-9626-5e639b82e02e" (UID: "2f8de231-4c1d-4895-9626-5e639b82e02e"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.461929 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2f8de231-4c1d-4895-9626-5e639b82e02e" (UID: "2f8de231-4c1d-4895-9626-5e639b82e02e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.468177 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-config" (OuterVolumeSpecName: "config") pod "2f8de231-4c1d-4895-9626-5e639b82e02e" (UID: "2f8de231-4c1d-4895-9626-5e639b82e02e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.532969 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2f8de231-4c1d-4895-9626-5e639b82e02e" (UID: "2f8de231-4c1d-4895-9626-5e639b82e02e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.552022 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.552054 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.552067 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.552077 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f8de231-4c1d-4895-9626-5e639b82e02e-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.565561 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.653314 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crxbh\" (UniqueName: \"kubernetes.io/projected/176656fb-70e6-424e-a054-8a3279e3a8fe-kube-api-access-crxbh\") pod \"176656fb-70e6-424e-a054-8a3279e3a8fe\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.653354 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-dns-svc\") pod \"176656fb-70e6-424e-a054-8a3279e3a8fe\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.653375 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-ovsdbserver-sb\") pod \"176656fb-70e6-424e-a054-8a3279e3a8fe\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.653417 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-ovsdbserver-nb\") pod \"176656fb-70e6-424e-a054-8a3279e3a8fe\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.653446 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-dns-swift-storage-0\") pod \"176656fb-70e6-424e-a054-8a3279e3a8fe\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.660935 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/176656fb-70e6-424e-a054-8a3279e3a8fe-kube-api-access-crxbh" (OuterVolumeSpecName: "kube-api-access-crxbh") pod "176656fb-70e6-424e-a054-8a3279e3a8fe" (UID: "176656fb-70e6-424e-a054-8a3279e3a8fe"). InnerVolumeSpecName "kube-api-access-crxbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.676298 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "176656fb-70e6-424e-a054-8a3279e3a8fe" (UID: "176656fb-70e6-424e-a054-8a3279e3a8fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.676892 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "176656fb-70e6-424e-a054-8a3279e3a8fe" (UID: "176656fb-70e6-424e-a054-8a3279e3a8fe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.677762 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "176656fb-70e6-424e-a054-8a3279e3a8fe" (UID: "176656fb-70e6-424e-a054-8a3279e3a8fe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.681056 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "176656fb-70e6-424e-a054-8a3279e3a8fe" (UID: "176656fb-70e6-424e-a054-8a3279e3a8fe"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.754784 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-config\") pod \"176656fb-70e6-424e-a054-8a3279e3a8fe\" (UID: \"176656fb-70e6-424e-a054-8a3279e3a8fe\") " Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.756271 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crxbh\" (UniqueName: \"kubernetes.io/projected/176656fb-70e6-424e-a054-8a3279e3a8fe-kube-api-access-crxbh\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.756313 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.756326 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.756335 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.756344 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.788344 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-config" 
(OuterVolumeSpecName: "config") pod "176656fb-70e6-424e-a054-8a3279e3a8fe" (UID: "176656fb-70e6-424e-a054-8a3279e3a8fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:23 crc kubenswrapper[4796]: I1205 10:41:23.857965 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/176656fb-70e6-424e-a054-8a3279e3a8fe-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.173076 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5689975857-jdl2f" event={"ID":"176656fb-70e6-424e-a054-8a3279e3a8fe","Type":"ContainerDied","Data":"1bcaaa549f83a010c5cada0768ed9da0b95163bf22ba0c27455ae3a1299fad13"} Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.173108 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5689975857-jdl2f" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.173140 4796 scope.go:117] "RemoveContainer" containerID="0aec0f32e163220759098fa21919d48c178ae69e82c007e0759e7e61f8d439e1" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.179916 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d879466b9-hwd7g" event={"ID":"2f8de231-4c1d-4895-9626-5e639b82e02e","Type":"ContainerDied","Data":"0322614546ecefafe1079408faa9cf1e3ef961e0f16d2804cc9d2332d93f07b0"} Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.179941 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d879466b9-hwd7g" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.186561 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"684ee16c-6c42-46e9-b629-87cdff8b3076","Type":"ContainerStarted","Data":"ba66d9f478790115d2350defabdf10cc4573891262fb46a84feb943043315e92"} Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.189596 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" event={"ID":"370c8f58-d257-4e3f-a54a-d34231b6dfd5","Type":"ContainerStarted","Data":"1e3543bd0ac25f2ce11f78c723f86ec480f8eb8ac419fd5b4a86beaf9da6ed74"} Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.189649 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.195550 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2c80cf99-cad2-41a5-bda0-eb1c8334da35","Type":"ContainerStarted","Data":"cec4b014b607f67fddf128a8594ae9c1a16386d01bb7b83b345dffd8237007ba"} Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.230468 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5689975857-jdl2f"] Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.233059 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5689975857-jdl2f"] Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.251755 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.251728709 podStartE2EDuration="4.251728709s" podCreationTimestamp="2025-12-05 10:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:41:24.249118628 +0000 UTC 
m=+830.537224142" watchObservedRunningTime="2025-12-05 10:41:24.251728709 +0000 UTC m=+830.539834222" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.273282 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" podStartSLOduration=4.273268022 podStartE2EDuration="4.273268022s" podCreationTimestamp="2025-12-05 10:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:41:24.27173104 +0000 UTC m=+830.559836554" watchObservedRunningTime="2025-12-05 10:41:24.273268022 +0000 UTC m=+830.561373535" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.297979 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d879466b9-hwd7g"] Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.308090 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d879466b9-hwd7g"] Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.554849 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.582917 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c85cf4b9c-c294z"] Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.598830 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-75f565c8bf-4qfx5"] Dec 05 10:41:24 crc kubenswrapper[4796]: E1205 10:41:24.599225 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176656fb-70e6-424e-a054-8a3279e3a8fe" containerName="init" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.599245 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="176656fb-70e6-424e-a054-8a3279e3a8fe" containerName="init" Dec 05 10:41:24 crc kubenswrapper[4796]: E1205 10:41:24.599280 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2f8de231-4c1d-4895-9626-5e639b82e02e" containerName="dnsmasq-dns" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.599289 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8de231-4c1d-4895-9626-5e639b82e02e" containerName="dnsmasq-dns" Dec 05 10:41:24 crc kubenswrapper[4796]: E1205 10:41:24.599303 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f8de231-4c1d-4895-9626-5e639b82e02e" containerName="init" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.599308 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f8de231-4c1d-4895-9626-5e639b82e02e" containerName="init" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.599474 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f8de231-4c1d-4895-9626-5e639b82e02e" containerName="dnsmasq-dns" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.599491 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="176656fb-70e6-424e-a054-8a3279e3a8fe" containerName="init" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.600517 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.618886 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75f565c8bf-4qfx5"] Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.625157 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.643039 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.678770 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9dd19b1-8fb3-439c-80e1-126c13ca90da-config-data\") pod \"horizon-75f565c8bf-4qfx5\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.678986 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e9dd19b1-8fb3-439c-80e1-126c13ca90da-horizon-secret-key\") pod \"horizon-75f565c8bf-4qfx5\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.679138 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9dd19b1-8fb3-439c-80e1-126c13ca90da-logs\") pod \"horizon-75f565c8bf-4qfx5\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.679361 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9dd19b1-8fb3-439c-80e1-126c13ca90da-scripts\") pod 
\"horizon-75f565c8bf-4qfx5\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.679436 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjnlx\" (UniqueName: \"kubernetes.io/projected/e9dd19b1-8fb3-439c-80e1-126c13ca90da-kube-api-access-fjnlx\") pod \"horizon-75f565c8bf-4qfx5\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.780916 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9dd19b1-8fb3-439c-80e1-126c13ca90da-logs\") pod \"horizon-75f565c8bf-4qfx5\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.781096 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9dd19b1-8fb3-439c-80e1-126c13ca90da-scripts\") pod \"horizon-75f565c8bf-4qfx5\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.781148 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjnlx\" (UniqueName: \"kubernetes.io/projected/e9dd19b1-8fb3-439c-80e1-126c13ca90da-kube-api-access-fjnlx\") pod \"horizon-75f565c8bf-4qfx5\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.781208 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9dd19b1-8fb3-439c-80e1-126c13ca90da-config-data\") pod \"horizon-75f565c8bf-4qfx5\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " 
pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.781245 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e9dd19b1-8fb3-439c-80e1-126c13ca90da-horizon-secret-key\") pod \"horizon-75f565c8bf-4qfx5\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.781415 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9dd19b1-8fb3-439c-80e1-126c13ca90da-logs\") pod \"horizon-75f565c8bf-4qfx5\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.782548 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9dd19b1-8fb3-439c-80e1-126c13ca90da-scripts\") pod \"horizon-75f565c8bf-4qfx5\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.783142 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9dd19b1-8fb3-439c-80e1-126c13ca90da-config-data\") pod \"horizon-75f565c8bf-4qfx5\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.790178 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e9dd19b1-8fb3-439c-80e1-126c13ca90da-horizon-secret-key\") pod \"horizon-75f565c8bf-4qfx5\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.806136 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fjnlx\" (UniqueName: \"kubernetes.io/projected/e9dd19b1-8fb3-439c-80e1-126c13ca90da-kube-api-access-fjnlx\") pod \"horizon-75f565c8bf-4qfx5\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:41:24 crc kubenswrapper[4796]: I1205 10:41:24.919292 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:41:26 crc kubenswrapper[4796]: I1205 10:41:26.046701 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="176656fb-70e6-424e-a054-8a3279e3a8fe" path="/var/lib/kubelet/pods/176656fb-70e6-424e-a054-8a3279e3a8fe/volumes" Dec 05 10:41:26 crc kubenswrapper[4796]: I1205 10:41:26.047589 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f8de231-4c1d-4895-9626-5e639b82e02e" path="/var/lib/kubelet/pods/2f8de231-4c1d-4895-9626-5e639b82e02e/volumes" Dec 05 10:41:26 crc kubenswrapper[4796]: I1205 10:41:26.211008 4796 generic.go:334] "Generic (PLEG): container finished" podID="cf200531-c073-4d03-916a-cb3f54c5aa89" containerID="e1b8948c49705b6c44a7145ef1a9315332d8d905e50ecd667137804c46528764" exitCode=0 Dec 05 10:41:26 crc kubenswrapper[4796]: I1205 10:41:26.211075 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dfsrl" event={"ID":"cf200531-c073-4d03-916a-cb3f54c5aa89","Type":"ContainerDied","Data":"e1b8948c49705b6c44a7145ef1a9315332d8d905e50ecd667137804c46528764"} Dec 05 10:41:26 crc kubenswrapper[4796]: I1205 10:41:26.211310 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="684ee16c-6c42-46e9-b629-87cdff8b3076" containerName="glance-log" containerID="cri-o://a100e0dbeb0419045eeedd96547edb2b013ab8cab5835652b907232f44cfd5c9" gracePeriod=30 Dec 05 10:41:26 crc kubenswrapper[4796]: I1205 10:41:26.211383 4796 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/glance-default-internal-api-0" podUID="684ee16c-6c42-46e9-b629-87cdff8b3076" containerName="glance-httpd" containerID="cri-o://ba66d9f478790115d2350defabdf10cc4573891262fb46a84feb943043315e92" gracePeriod=30 Dec 05 10:41:26 crc kubenswrapper[4796]: I1205 10:41:26.211479 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2c80cf99-cad2-41a5-bda0-eb1c8334da35" containerName="glance-log" containerID="cri-o://9ec45a366dc039a2ea53db3ec81cd25b1c6d4872d3a7f535a48adf852311f188" gracePeriod=30 Dec 05 10:41:26 crc kubenswrapper[4796]: I1205 10:41:26.211510 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2c80cf99-cad2-41a5-bda0-eb1c8334da35" containerName="glance-httpd" containerID="cri-o://cec4b014b607f67fddf128a8594ae9c1a16386d01bb7b83b345dffd8237007ba" gracePeriod=30 Dec 05 10:41:26 crc kubenswrapper[4796]: I1205 10:41:26.293081 4796 scope.go:117] "RemoveContainer" containerID="065c165a6b894700c5d5cdbb16945507d3068b2e69d8298c69dda717e830bf13" Dec 05 10:41:26 crc kubenswrapper[4796]: I1205 10:41:26.525580 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6b1d-account-create-h2cct" Dec 05 10:41:26 crc kubenswrapper[4796]: I1205 10:41:26.531011 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-e210-account-create-qkfmm" Dec 05 10:41:26 crc kubenswrapper[4796]: I1205 10:41:26.611064 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zndmb\" (UniqueName: \"kubernetes.io/projected/ad5ee03d-3cd9-4633-bdf2-00942ae22258-kube-api-access-zndmb\") pod \"ad5ee03d-3cd9-4633-bdf2-00942ae22258\" (UID: \"ad5ee03d-3cd9-4633-bdf2-00942ae22258\") " Dec 05 10:41:26 crc kubenswrapper[4796]: I1205 10:41:26.611180 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8dkn\" (UniqueName: \"kubernetes.io/projected/ddc03506-8f42-469e-8315-b0bfe3b4c2be-kube-api-access-j8dkn\") pod \"ddc03506-8f42-469e-8315-b0bfe3b4c2be\" (UID: \"ddc03506-8f42-469e-8315-b0bfe3b4c2be\") " Dec 05 10:41:26 crc kubenswrapper[4796]: I1205 10:41:26.617230 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddc03506-8f42-469e-8315-b0bfe3b4c2be-kube-api-access-j8dkn" (OuterVolumeSpecName: "kube-api-access-j8dkn") pod "ddc03506-8f42-469e-8315-b0bfe3b4c2be" (UID: "ddc03506-8f42-469e-8315-b0bfe3b4c2be"). InnerVolumeSpecName "kube-api-access-j8dkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:26 crc kubenswrapper[4796]: I1205 10:41:26.617362 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad5ee03d-3cd9-4633-bdf2-00942ae22258-kube-api-access-zndmb" (OuterVolumeSpecName: "kube-api-access-zndmb") pod "ad5ee03d-3cd9-4633-bdf2-00942ae22258" (UID: "ad5ee03d-3cd9-4633-bdf2-00942ae22258"). InnerVolumeSpecName "kube-api-access-zndmb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:26 crc kubenswrapper[4796]: I1205 10:41:26.680936 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75f565c8bf-4qfx5"] Dec 05 10:41:26 crc kubenswrapper[4796]: I1205 10:41:26.713888 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zndmb\" (UniqueName: \"kubernetes.io/projected/ad5ee03d-3cd9-4633-bdf2-00942ae22258-kube-api-access-zndmb\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:26 crc kubenswrapper[4796]: I1205 10:41:26.713910 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8dkn\" (UniqueName: \"kubernetes.io/projected/ddc03506-8f42-469e-8315-b0bfe3b4c2be-kube-api-access-j8dkn\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:27 crc kubenswrapper[4796]: I1205 10:41:27.219910 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-e210-account-create-qkfmm" event={"ID":"ad5ee03d-3cd9-4633-bdf2-00942ae22258","Type":"ContainerDied","Data":"a1f43ca864d2b7a7d37f87be0d888204a170d00af0020645c555cda4eb18845a"} Dec 05 10:41:27 crc kubenswrapper[4796]: I1205 10:41:27.219970 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1f43ca864d2b7a7d37f87be0d888204a170d00af0020645c555cda4eb18845a" Dec 05 10:41:27 crc kubenswrapper[4796]: I1205 10:41:27.219931 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-e210-account-create-qkfmm" Dec 05 10:41:27 crc kubenswrapper[4796]: I1205 10:41:27.222513 4796 generic.go:334] "Generic (PLEG): container finished" podID="2c80cf99-cad2-41a5-bda0-eb1c8334da35" containerID="cec4b014b607f67fddf128a8594ae9c1a16386d01bb7b83b345dffd8237007ba" exitCode=0 Dec 05 10:41:27 crc kubenswrapper[4796]: I1205 10:41:27.222532 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2c80cf99-cad2-41a5-bda0-eb1c8334da35","Type":"ContainerDied","Data":"cec4b014b607f67fddf128a8594ae9c1a16386d01bb7b83b345dffd8237007ba"} Dec 05 10:41:27 crc kubenswrapper[4796]: I1205 10:41:27.222553 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2c80cf99-cad2-41a5-bda0-eb1c8334da35","Type":"ContainerDied","Data":"9ec45a366dc039a2ea53db3ec81cd25b1c6d4872d3a7f535a48adf852311f188"} Dec 05 10:41:27 crc kubenswrapper[4796]: I1205 10:41:27.222538 4796 generic.go:334] "Generic (PLEG): container finished" podID="2c80cf99-cad2-41a5-bda0-eb1c8334da35" containerID="9ec45a366dc039a2ea53db3ec81cd25b1c6d4872d3a7f535a48adf852311f188" exitCode=143 Dec 05 10:41:27 crc kubenswrapper[4796]: I1205 10:41:27.225046 4796 generic.go:334] "Generic (PLEG): container finished" podID="684ee16c-6c42-46e9-b629-87cdff8b3076" containerID="ba66d9f478790115d2350defabdf10cc4573891262fb46a84feb943043315e92" exitCode=0 Dec 05 10:41:27 crc kubenswrapper[4796]: I1205 10:41:27.225074 4796 generic.go:334] "Generic (PLEG): container finished" podID="684ee16c-6c42-46e9-b629-87cdff8b3076" containerID="a100e0dbeb0419045eeedd96547edb2b013ab8cab5835652b907232f44cfd5c9" exitCode=143 Dec 05 10:41:27 crc kubenswrapper[4796]: I1205 10:41:27.225113 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"684ee16c-6c42-46e9-b629-87cdff8b3076","Type":"ContainerDied","Data":"ba66d9f478790115d2350defabdf10cc4573891262fb46a84feb943043315e92"} Dec 05 10:41:27 crc kubenswrapper[4796]: I1205 10:41:27.225138 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"684ee16c-6c42-46e9-b629-87cdff8b3076","Type":"ContainerDied","Data":"a100e0dbeb0419045eeedd96547edb2b013ab8cab5835652b907232f44cfd5c9"} Dec 05 10:41:27 crc kubenswrapper[4796]: I1205 10:41:27.226363 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6b1d-account-create-h2cct" Dec 05 10:41:27 crc kubenswrapper[4796]: I1205 10:41:27.226351 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6b1d-account-create-h2cct" event={"ID":"ddc03506-8f42-469e-8315-b0bfe3b4c2be","Type":"ContainerDied","Data":"e09215b0bcbe8579bf3c4185673f10703a4fbf409647953153f058bb449f0c6d"} Dec 05 10:41:27 crc kubenswrapper[4796]: I1205 10:41:27.226401 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e09215b0bcbe8579bf3c4185673f10703a4fbf409647953153f058bb449f0c6d" Dec 05 10:41:28 crc kubenswrapper[4796]: I1205 10:41:28.790931 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7ddql" Dec 05 10:41:28 crc kubenswrapper[4796]: I1205 10:41:28.791408 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7ddql" Dec 05 10:41:28 crc kubenswrapper[4796]: I1205 10:41:28.834856 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7ddql" Dec 05 10:41:29 crc kubenswrapper[4796]: I1205 10:41:29.282592 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7ddql" Dec 05 10:41:29 crc kubenswrapper[4796]: I1205 
10:41:29.318718 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7ddql"] Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.198135 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8549c6756f-mnmb6"] Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.230589 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69fdddc9b6-2ckhp"] Dec 05 10:41:30 crc kubenswrapper[4796]: E1205 10:41:30.231024 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddc03506-8f42-469e-8315-b0bfe3b4c2be" containerName="mariadb-account-create" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.231046 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddc03506-8f42-469e-8315-b0bfe3b4c2be" containerName="mariadb-account-create" Dec 05 10:41:30 crc kubenswrapper[4796]: E1205 10:41:30.231069 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad5ee03d-3cd9-4633-bdf2-00942ae22258" containerName="mariadb-account-create" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.231076 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad5ee03d-3cd9-4633-bdf2-00942ae22258" containerName="mariadb-account-create" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.231260 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddc03506-8f42-469e-8315-b0bfe3b4c2be" containerName="mariadb-account-create" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.231287 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad5ee03d-3cd9-4633-bdf2-00942ae22258" containerName="mariadb-account-create" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.232216 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.234393 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.240411 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69fdddc9b6-2ckhp"] Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.252335 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75f565c8bf-4qfx5"] Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.285387 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-78ddb58-f7j44"] Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.286900 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.294259 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78ddb58-f7j44"] Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.395569 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdca92fe-39ad-41e9-978b-1757290eee03-combined-ca-bundle\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.395614 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fdca92fe-39ad-41e9-978b-1757290eee03-horizon-secret-key\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.395639 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdca92fe-39ad-41e9-978b-1757290eee03-horizon-tls-certs\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.395744 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdca92fe-39ad-41e9-978b-1757290eee03-logs\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.395784 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05623376-2343-40fb-a4df-508ce1e333e2-logs\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.395808 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05623376-2343-40fb-a4df-508ce1e333e2-combined-ca-bundle\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.395827 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/05623376-2343-40fb-a4df-508ce1e333e2-horizon-secret-key\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.395878 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6flc9\" 
(UniqueName: \"kubernetes.io/projected/fdca92fe-39ad-41e9-978b-1757290eee03-kube-api-access-6flc9\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.395914 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7dc6\" (UniqueName: \"kubernetes.io/projected/05623376-2343-40fb-a4df-508ce1e333e2-kube-api-access-v7dc6\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.395944 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fdca92fe-39ad-41e9-978b-1757290eee03-config-data\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.396141 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/05623376-2343-40fb-a4df-508ce1e333e2-horizon-tls-certs\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.396216 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05623376-2343-40fb-a4df-508ce1e333e2-config-data\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.396304 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/05623376-2343-40fb-a4df-508ce1e333e2-scripts\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.396376 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fdca92fe-39ad-41e9-978b-1757290eee03-scripts\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.498405 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05623376-2343-40fb-a4df-508ce1e333e2-combined-ca-bundle\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.498444 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/05623376-2343-40fb-a4df-508ce1e333e2-horizon-secret-key\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.498490 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6flc9\" (UniqueName: \"kubernetes.io/projected/fdca92fe-39ad-41e9-978b-1757290eee03-kube-api-access-6flc9\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.498523 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7dc6\" (UniqueName: 
\"kubernetes.io/projected/05623376-2343-40fb-a4df-508ce1e333e2-kube-api-access-v7dc6\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.498556 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fdca92fe-39ad-41e9-978b-1757290eee03-config-data\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.498603 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/05623376-2343-40fb-a4df-508ce1e333e2-horizon-tls-certs\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.498622 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05623376-2343-40fb-a4df-508ce1e333e2-config-data\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.498649 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05623376-2343-40fb-a4df-508ce1e333e2-scripts\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.498670 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fdca92fe-39ad-41e9-978b-1757290eee03-scripts\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: 
\"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.498723 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdca92fe-39ad-41e9-978b-1757290eee03-combined-ca-bundle\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.498742 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fdca92fe-39ad-41e9-978b-1757290eee03-horizon-secret-key\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.498759 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdca92fe-39ad-41e9-978b-1757290eee03-horizon-tls-certs\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.498801 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdca92fe-39ad-41e9-978b-1757290eee03-logs\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.498818 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05623376-2343-40fb-a4df-508ce1e333e2-logs\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: 
I1205 10:41:30.499237 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05623376-2343-40fb-a4df-508ce1e333e2-logs\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.499507 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05623376-2343-40fb-a4df-508ce1e333e2-scripts\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.500478 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fdca92fe-39ad-41e9-978b-1757290eee03-scripts\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.500770 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05623376-2343-40fb-a4df-508ce1e333e2-config-data\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.501524 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdca92fe-39ad-41e9-978b-1757290eee03-logs\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.503962 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fdca92fe-39ad-41e9-978b-1757290eee03-config-data\") pod 
\"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.505316 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/05623376-2343-40fb-a4df-508ce1e333e2-horizon-tls-certs\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.506035 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/05623376-2343-40fb-a4df-508ce1e333e2-horizon-secret-key\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.509783 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05623376-2343-40fb-a4df-508ce1e333e2-combined-ca-bundle\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.509901 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdca92fe-39ad-41e9-978b-1757290eee03-combined-ca-bundle\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.510777 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fdca92fe-39ad-41e9-978b-1757290eee03-horizon-secret-key\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 
10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.512611 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdca92fe-39ad-41e9-978b-1757290eee03-horizon-tls-certs\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.515452 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6flc9\" (UniqueName: \"kubernetes.io/projected/fdca92fe-39ad-41e9-978b-1757290eee03-kube-api-access-6flc9\") pod \"horizon-69fdddc9b6-2ckhp\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.516324 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7dc6\" (UniqueName: \"kubernetes.io/projected/05623376-2343-40fb-a4df-508ce1e333e2-kube-api-access-v7dc6\") pod \"horizon-78ddb58-f7j44\" (UID: \"05623376-2343-40fb-a4df-508ce1e333e2\") " pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.551165 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.600122 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:30 crc kubenswrapper[4796]: I1205 10:41:30.990942 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:41:31 crc kubenswrapper[4796]: I1205 10:41:31.048484 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864b648dc7-zjf7j"] Dec 05 10:41:31 crc kubenswrapper[4796]: I1205 10:41:31.050197 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" podUID="885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5" containerName="dnsmasq-dns" containerID="cri-o://048621eedacee387ce4bc4e7a8d288b150a894c61773fd4e86218171887be854" gracePeriod=10 Dec 05 10:41:31 crc kubenswrapper[4796]: I1205 10:41:31.285973 4796 generic.go:334] "Generic (PLEG): container finished" podID="885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5" containerID="048621eedacee387ce4bc4e7a8d288b150a894c61773fd4e86218171887be854" exitCode=0 Dec 05 10:41:31 crc kubenswrapper[4796]: I1205 10:41:31.286259 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7ddql" podUID="4c607361-8750-4197-ab5d-071d2d63ba1f" containerName="registry-server" containerID="cri-o://216bcca1dfa13bc88515e966c688b3f57739a7d50783f5f2fb1d1553397e8112" gracePeriod=2 Dec 05 10:41:31 crc kubenswrapper[4796]: I1205 10:41:31.286605 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" event={"ID":"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5","Type":"ContainerDied","Data":"048621eedacee387ce4bc4e7a8d288b150a894c61773fd4e86218171887be854"} Dec 05 10:41:31 crc kubenswrapper[4796]: I1205 10:41:31.820535 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-46e3-account-create-4b4vf"] Dec 05 10:41:31 crc kubenswrapper[4796]: I1205 10:41:31.822453 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-46e3-account-create-4b4vf" Dec 05 10:41:31 crc kubenswrapper[4796]: I1205 10:41:31.826989 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 05 10:41:31 crc kubenswrapper[4796]: I1205 10:41:31.839916 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-46e3-account-create-4b4vf"] Dec 05 10:41:31 crc kubenswrapper[4796]: I1205 10:41:31.848492 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-qh5mr"] Dec 05 10:41:31 crc kubenswrapper[4796]: I1205 10:41:31.849801 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:31 crc kubenswrapper[4796]: I1205 10:41:31.851405 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-sw6dh" Dec 05 10:41:31 crc kubenswrapper[4796]: I1205 10:41:31.851550 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 05 10:41:31 crc kubenswrapper[4796]: I1205 10:41:31.852805 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 05 10:41:31 crc kubenswrapper[4796]: I1205 10:41:31.854956 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qh5mr"] Dec 05 10:41:31 crc kubenswrapper[4796]: I1205 10:41:31.939584 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zkrb\" (UniqueName: \"kubernetes.io/projected/2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156-kube-api-access-6zkrb\") pod \"neutron-46e3-account-create-4b4vf\" (UID: \"2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156\") " pod="openstack/neutron-46e3-account-create-4b4vf" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.041821 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-config-data\") pod \"cinder-db-sync-qh5mr\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.041868 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h497x\" (UniqueName: \"kubernetes.io/projected/04e95ca1-d131-4528-baaf-0be6b98a5edf-kube-api-access-h497x\") pod \"cinder-db-sync-qh5mr\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.042033 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zkrb\" (UniqueName: \"kubernetes.io/projected/2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156-kube-api-access-6zkrb\") pod \"neutron-46e3-account-create-4b4vf\" (UID: \"2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156\") " pod="openstack/neutron-46e3-account-create-4b4vf" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.042081 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-db-sync-config-data\") pod \"cinder-db-sync-qh5mr\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.042235 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04e95ca1-d131-4528-baaf-0be6b98a5edf-etc-machine-id\") pod \"cinder-db-sync-qh5mr\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.042343 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-combined-ca-bundle\") pod \"cinder-db-sync-qh5mr\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.042394 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-scripts\") pod \"cinder-db-sync-qh5mr\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.062780 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zkrb\" (UniqueName: \"kubernetes.io/projected/2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156-kube-api-access-6zkrb\") pod \"neutron-46e3-account-create-4b4vf\" (UID: \"2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156\") " pod="openstack/neutron-46e3-account-create-4b4vf" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.140032 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-46e3-account-create-4b4vf" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.143673 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-combined-ca-bundle\") pod \"cinder-db-sync-qh5mr\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.143731 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-scripts\") pod \"cinder-db-sync-qh5mr\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.143777 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-config-data\") pod \"cinder-db-sync-qh5mr\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.143802 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h497x\" (UniqueName: \"kubernetes.io/projected/04e95ca1-d131-4528-baaf-0be6b98a5edf-kube-api-access-h497x\") pod \"cinder-db-sync-qh5mr\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.143840 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-db-sync-config-data\") pod \"cinder-db-sync-qh5mr\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 
10:41:32.143884 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04e95ca1-d131-4528-baaf-0be6b98a5edf-etc-machine-id\") pod \"cinder-db-sync-qh5mr\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.143942 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04e95ca1-d131-4528-baaf-0be6b98a5edf-etc-machine-id\") pod \"cinder-db-sync-qh5mr\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.152290 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-config-data\") pod \"cinder-db-sync-qh5mr\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.154005 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-db-sync-config-data\") pod \"cinder-db-sync-qh5mr\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.158553 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-combined-ca-bundle\") pod \"cinder-db-sync-qh5mr\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.159254 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-scripts\") pod \"cinder-db-sync-qh5mr\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.165302 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-2fftm"] Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.166486 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2fftm" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.168069 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2f7st" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.168418 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.173120 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2fftm"] Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.176514 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h497x\" (UniqueName: \"kubernetes.io/projected/04e95ca1-d131-4528-baaf-0be6b98a5edf-kube-api-access-h497x\") pod \"cinder-db-sync-qh5mr\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.308762 4796 generic.go:334] "Generic (PLEG): container finished" podID="4c607361-8750-4197-ab5d-071d2d63ba1f" containerID="216bcca1dfa13bc88515e966c688b3f57739a7d50783f5f2fb1d1553397e8112" exitCode=0 Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.308839 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ddql" event={"ID":"4c607361-8750-4197-ab5d-071d2d63ba1f","Type":"ContainerDied","Data":"216bcca1dfa13bc88515e966c688b3f57739a7d50783f5f2fb1d1553397e8112"} Dec 05 10:41:32 crc 
kubenswrapper[4796]: I1205 10:41:32.347887 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b644d420-f868-46a7-9e03-1d6fc4f78894-combined-ca-bundle\") pod \"barbican-db-sync-2fftm\" (UID: \"b644d420-f868-46a7-9e03-1d6fc4f78894\") " pod="openstack/barbican-db-sync-2fftm" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.348016 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b644d420-f868-46a7-9e03-1d6fc4f78894-db-sync-config-data\") pod \"barbican-db-sync-2fftm\" (UID: \"b644d420-f868-46a7-9e03-1d6fc4f78894\") " pod="openstack/barbican-db-sync-2fftm" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.348092 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj47r\" (UniqueName: \"kubernetes.io/projected/b644d420-f868-46a7-9e03-1d6fc4f78894-kube-api-access-sj47r\") pod \"barbican-db-sync-2fftm\" (UID: \"b644d420-f868-46a7-9e03-1d6fc4f78894\") " pod="openstack/barbican-db-sync-2fftm" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.451120 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b644d420-f868-46a7-9e03-1d6fc4f78894-combined-ca-bundle\") pod \"barbican-db-sync-2fftm\" (UID: \"b644d420-f868-46a7-9e03-1d6fc4f78894\") " pod="openstack/barbican-db-sync-2fftm" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.451191 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b644d420-f868-46a7-9e03-1d6fc4f78894-db-sync-config-data\") pod \"barbican-db-sync-2fftm\" (UID: \"b644d420-f868-46a7-9e03-1d6fc4f78894\") " pod="openstack/barbican-db-sync-2fftm" Dec 05 10:41:32 crc 
kubenswrapper[4796]: I1205 10:41:32.451275 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj47r\" (UniqueName: \"kubernetes.io/projected/b644d420-f868-46a7-9e03-1d6fc4f78894-kube-api-access-sj47r\") pod \"barbican-db-sync-2fftm\" (UID: \"b644d420-f868-46a7-9e03-1d6fc4f78894\") " pod="openstack/barbican-db-sync-2fftm" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.457476 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b644d420-f868-46a7-9e03-1d6fc4f78894-combined-ca-bundle\") pod \"barbican-db-sync-2fftm\" (UID: \"b644d420-f868-46a7-9e03-1d6fc4f78894\") " pod="openstack/barbican-db-sync-2fftm" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.457636 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b644d420-f868-46a7-9e03-1d6fc4f78894-db-sync-config-data\") pod \"barbican-db-sync-2fftm\" (UID: \"b644d420-f868-46a7-9e03-1d6fc4f78894\") " pod="openstack/barbican-db-sync-2fftm" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.464981 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.465473 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj47r\" (UniqueName: \"kubernetes.io/projected/b644d420-f868-46a7-9e03-1d6fc4f78894-kube-api-access-sj47r\") pod \"barbican-db-sync-2fftm\" (UID: \"b644d420-f868-46a7-9e03-1d6fc4f78894\") " pod="openstack/barbican-db-sync-2fftm" Dec 05 10:41:32 crc kubenswrapper[4796]: I1205 10:41:32.533267 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2fftm" Dec 05 10:41:34 crc kubenswrapper[4796]: W1205 10:41:34.885176 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9dd19b1_8fb3_439c_80e1_126c13ca90da.slice/crio-75989a5658acf5d16e8fa3f4a37d909fddca28cc4b534663b6602d5cee3944ac WatchSource:0}: Error finding container 75989a5658acf5d16e8fa3f4a37d909fddca28cc4b534663b6602d5cee3944ac: Status 404 returned error can't find the container with id 75989a5658acf5d16e8fa3f4a37d909fddca28cc4b534663b6602d5cee3944ac Dec 05 10:41:34 crc kubenswrapper[4796]: I1205 10:41:34.962254 4796 scope.go:117] "RemoveContainer" containerID="3f186b966a6eb883079b17a1d3993fc3374a562e5a349cc1cdc876b8df78141c" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.022068 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.023490 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-combined-ca-bundle\") pod \"cf200531-c073-4d03-916a-cb3f54c5aa89\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.023540 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-credential-keys\") pod \"cf200531-c073-4d03-916a-cb3f54c5aa89\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.023623 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnbg4\" (UniqueName: \"kubernetes.io/projected/cf200531-c073-4d03-916a-cb3f54c5aa89-kube-api-access-gnbg4\") pod \"cf200531-c073-4d03-916a-cb3f54c5aa89\" 
(UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.023677 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-fernet-keys\") pod \"cf200531-c073-4d03-916a-cb3f54c5aa89\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.023748 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-config-data\") pod \"cf200531-c073-4d03-916a-cb3f54c5aa89\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.023860 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-scripts\") pod \"cf200531-c073-4d03-916a-cb3f54c5aa89\" (UID: \"cf200531-c073-4d03-916a-cb3f54c5aa89\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.027460 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "cf200531-c073-4d03-916a-cb3f54c5aa89" (UID: "cf200531-c073-4d03-916a-cb3f54c5aa89"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.028086 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "cf200531-c073-4d03-916a-cb3f54c5aa89" (UID: "cf200531-c073-4d03-916a-cb3f54c5aa89"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.028985 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf200531-c073-4d03-916a-cb3f54c5aa89-kube-api-access-gnbg4" (OuterVolumeSpecName: "kube-api-access-gnbg4") pod "cf200531-c073-4d03-916a-cb3f54c5aa89" (UID: "cf200531-c073-4d03-916a-cb3f54c5aa89"). InnerVolumeSpecName "kube-api-access-gnbg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.047646 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf200531-c073-4d03-916a-cb3f54c5aa89" (UID: "cf200531-c073-4d03-916a-cb3f54c5aa89"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.048341 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-scripts" (OuterVolumeSpecName: "scripts") pod "cf200531-c073-4d03-916a-cb3f54c5aa89" (UID: "cf200531-c073-4d03-916a-cb3f54c5aa89"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.054348 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-config-data" (OuterVolumeSpecName: "config-data") pod "cf200531-c073-4d03-916a-cb3f54c5aa89" (UID: "cf200531-c073-4d03-916a-cb3f54c5aa89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.065392 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.097623 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.125500 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/684ee16c-6c42-46e9-b629-87cdff8b3076-logs\") pod \"684ee16c-6c42-46e9-b629-87cdff8b3076\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.125572 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-combined-ca-bundle\") pod \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.125596 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-config-data\") pod \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.125894 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-internal-tls-certs\") pod \"684ee16c-6c42-46e9-b629-87cdff8b3076\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.125918 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " Dec 05 10:41:35 crc 
kubenswrapper[4796]: I1205 10:41:35.125967 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-public-tls-certs\") pod \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.125979 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/684ee16c-6c42-46e9-b629-87cdff8b3076-logs" (OuterVolumeSpecName: "logs") pod "684ee16c-6c42-46e9-b629-87cdff8b3076" (UID: "684ee16c-6c42-46e9-b629-87cdff8b3076"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.126030 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"684ee16c-6c42-46e9-b629-87cdff8b3076\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.126056 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-config-data\") pod \"684ee16c-6c42-46e9-b629-87cdff8b3076\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.126076 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-scripts\") pod \"684ee16c-6c42-46e9-b629-87cdff8b3076\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.126167 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/684ee16c-6c42-46e9-b629-87cdff8b3076-httpd-run\") pod 
\"684ee16c-6c42-46e9-b629-87cdff8b3076\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.126196 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c80cf99-cad2-41a5-bda0-eb1c8334da35-logs\") pod \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.126280 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcn6f\" (UniqueName: \"kubernetes.io/projected/684ee16c-6c42-46e9-b629-87cdff8b3076-kube-api-access-gcn6f\") pod \"684ee16c-6c42-46e9-b629-87cdff8b3076\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.126307 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-combined-ca-bundle\") pod \"684ee16c-6c42-46e9-b629-87cdff8b3076\" (UID: \"684ee16c-6c42-46e9-b629-87cdff8b3076\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.126358 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-scripts\") pod \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.126379 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsvq5\" (UniqueName: \"kubernetes.io/projected/2c80cf99-cad2-41a5-bda0-eb1c8334da35-kube-api-access-zsvq5\") pod \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.126395 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2c80cf99-cad2-41a5-bda0-eb1c8334da35-httpd-run\") pod \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\" (UID: \"2c80cf99-cad2-41a5-bda0-eb1c8334da35\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.127342 4796 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.127359 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/684ee16c-6c42-46e9-b629-87cdff8b3076-logs\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.127369 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnbg4\" (UniqueName: \"kubernetes.io/projected/cf200531-c073-4d03-916a-cb3f54c5aa89-kube-api-access-gnbg4\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.127378 4796 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.127387 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.127396 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.127423 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cf200531-c073-4d03-916a-cb3f54c5aa89-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.131305 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "2c80cf99-cad2-41a5-bda0-eb1c8334da35" (UID: "2c80cf99-cad2-41a5-bda0-eb1c8334da35"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.135782 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c80cf99-cad2-41a5-bda0-eb1c8334da35-logs" (OuterVolumeSpecName: "logs") pod "2c80cf99-cad2-41a5-bda0-eb1c8334da35" (UID: "2c80cf99-cad2-41a5-bda0-eb1c8334da35"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.135829 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/684ee16c-6c42-46e9-b629-87cdff8b3076-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "684ee16c-6c42-46e9-b629-87cdff8b3076" (UID: "684ee16c-6c42-46e9-b629-87cdff8b3076"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.136283 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c80cf99-cad2-41a5-bda0-eb1c8334da35-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2c80cf99-cad2-41a5-bda0-eb1c8334da35" (UID: "2c80cf99-cad2-41a5-bda0-eb1c8334da35"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.137812 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-scripts" (OuterVolumeSpecName: "scripts") pod "2c80cf99-cad2-41a5-bda0-eb1c8334da35" (UID: "2c80cf99-cad2-41a5-bda0-eb1c8334da35"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.144784 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c80cf99-cad2-41a5-bda0-eb1c8334da35-kube-api-access-zsvq5" (OuterVolumeSpecName: "kube-api-access-zsvq5") pod "2c80cf99-cad2-41a5-bda0-eb1c8334da35" (UID: "2c80cf99-cad2-41a5-bda0-eb1c8334da35"). InnerVolumeSpecName "kube-api-access-zsvq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.153639 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/684ee16c-6c42-46e9-b629-87cdff8b3076-kube-api-access-gcn6f" (OuterVolumeSpecName: "kube-api-access-gcn6f") pod "684ee16c-6c42-46e9-b629-87cdff8b3076" (UID: "684ee16c-6c42-46e9-b629-87cdff8b3076"). InnerVolumeSpecName "kube-api-access-gcn6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.165265 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "684ee16c-6c42-46e9-b629-87cdff8b3076" (UID: "684ee16c-6c42-46e9-b629-87cdff8b3076"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.168764 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-scripts" (OuterVolumeSpecName: "scripts") pod "684ee16c-6c42-46e9-b629-87cdff8b3076" (UID: "684ee16c-6c42-46e9-b629-87cdff8b3076"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.174260 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c80cf99-cad2-41a5-bda0-eb1c8334da35" (UID: "2c80cf99-cad2-41a5-bda0-eb1c8334da35"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.187988 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-config-data" (OuterVolumeSpecName: "config-data") pod "2c80cf99-cad2-41a5-bda0-eb1c8334da35" (UID: "2c80cf99-cad2-41a5-bda0-eb1c8334da35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.189135 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "684ee16c-6c42-46e9-b629-87cdff8b3076" (UID: "684ee16c-6c42-46e9-b629-87cdff8b3076"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.189651 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "684ee16c-6c42-46e9-b629-87cdff8b3076" (UID: "684ee16c-6c42-46e9-b629-87cdff8b3076"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.219064 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-config-data" (OuterVolumeSpecName: "config-data") pod "684ee16c-6c42-46e9-b629-87cdff8b3076" (UID: "684ee16c-6c42-46e9-b629-87cdff8b3076"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.222447 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2c80cf99-cad2-41a5-bda0-eb1c8334da35" (UID: "2c80cf99-cad2-41a5-bda0-eb1c8334da35"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.229416 4796 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.229564 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.229640 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.229889 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.229967 4796 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/684ee16c-6c42-46e9-b629-87cdff8b3076-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.230038 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2c80cf99-cad2-41a5-bda0-eb1c8334da35-logs\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.230099 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcn6f\" (UniqueName: \"kubernetes.io/projected/684ee16c-6c42-46e9-b629-87cdff8b3076-kube-api-access-gcn6f\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.230259 4796 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.230322 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.230376 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsvq5\" (UniqueName: \"kubernetes.io/projected/2c80cf99-cad2-41a5-bda0-eb1c8334da35-kube-api-access-zsvq5\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.230423 4796 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2c80cf99-cad2-41a5-bda0-eb1c8334da35-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.230485 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.230535 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c80cf99-cad2-41a5-bda0-eb1c8334da35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.230599 4796 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/684ee16c-6c42-46e9-b629-87cdff8b3076-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.230649 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " 
Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.237722 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7ddql" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.243729 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.253314 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.332237 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c607361-8750-4197-ab5d-071d2d63ba1f-utilities\") pod \"4c607361-8750-4197-ab5d-071d2d63ba1f\" (UID: \"4c607361-8750-4197-ab5d-071d2d63ba1f\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.332659 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbb5h\" (UniqueName: \"kubernetes.io/projected/4c607361-8750-4197-ab5d-071d2d63ba1f-kube-api-access-sbb5h\") pod \"4c607361-8750-4197-ab5d-071d2d63ba1f\" (UID: \"4c607361-8750-4197-ab5d-071d2d63ba1f\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.332798 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c607361-8750-4197-ab5d-071d2d63ba1f-catalog-content\") pod \"4c607361-8750-4197-ab5d-071d2d63ba1f\" (UID: \"4c607361-8750-4197-ab5d-071d2d63ba1f\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.332898 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c607361-8750-4197-ab5d-071d2d63ba1f-utilities" (OuterVolumeSpecName: "utilities") pod 
"4c607361-8750-4197-ab5d-071d2d63ba1f" (UID: "4c607361-8750-4197-ab5d-071d2d63ba1f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.333120 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.333138 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.333146 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c607361-8750-4197-ab5d-071d2d63ba1f-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.337610 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c607361-8750-4197-ab5d-071d2d63ba1f-kube-api-access-sbb5h" (OuterVolumeSpecName: "kube-api-access-sbb5h") pod "4c607361-8750-4197-ab5d-071d2d63ba1f" (UID: "4c607361-8750-4197-ab5d-071d2d63ba1f"). InnerVolumeSpecName "kube-api-access-sbb5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.347518 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.352407 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2c80cf99-cad2-41a5-bda0-eb1c8334da35","Type":"ContainerDied","Data":"186e377270a0b5ddaf9e7496801b641ac1e15e876d417b55251f4a2d24523ac9"} Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.352443 4796 scope.go:117] "RemoveContainer" containerID="cec4b014b607f67fddf128a8594ae9c1a16386d01bb7b83b345dffd8237007ba" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.352501 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.356787 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" event={"ID":"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5","Type":"ContainerDied","Data":"5c731b1214e37a7703e573cf49c3af597016a8b40ce923f04ae266fb76f540c3"} Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.356892 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.359663 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7ddql" event={"ID":"4c607361-8750-4197-ab5d-071d2d63ba1f","Type":"ContainerDied","Data":"16216d38d4f7724951b78b28e6b7f0be22dd275847a26fd9625d4ee48d80633d"} Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.359735 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7ddql" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.372534 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75f565c8bf-4qfx5" event={"ID":"e9dd19b1-8fb3-439c-80e1-126c13ca90da","Type":"ContainerStarted","Data":"75989a5658acf5d16e8fa3f4a37d909fddca28cc4b534663b6602d5cee3944ac"} Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.381305 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c607361-8750-4197-ab5d-071d2d63ba1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c607361-8750-4197-ab5d-071d2d63ba1f" (UID: "4c607361-8750-4197-ab5d-071d2d63ba1f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.387987 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dfsrl" event={"ID":"cf200531-c073-4d03-916a-cb3f54c5aa89","Type":"ContainerDied","Data":"455a0ba9c3714e2b07b0804e6a32ce495e894ab27022c9824103b9e53b753e13"} Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.388019 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="455a0ba9c3714e2b07b0804e6a32ce495e894ab27022c9824103b9e53b753e13" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.388085 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dfsrl" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.392745 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"684ee16c-6c42-46e9-b629-87cdff8b3076","Type":"ContainerDied","Data":"7ca51cc3610c38dd2d0d3587378ccbd92143bcca94fe195698d9e822604e1068"} Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.392883 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.395110 4796 scope.go:117] "RemoveContainer" containerID="9ec45a366dc039a2ea53db3ec81cd25b1c6d4872d3a7f535a48adf852311f188" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.437131 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.438661 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fd29\" (UniqueName: \"kubernetes.io/projected/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-kube-api-access-9fd29\") pod \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.438825 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-dns-svc\") pod \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.439280 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-dns-swift-storage-0\") pod \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.439376 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-config\") pod \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.439432 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-ovsdbserver-nb\") pod \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.439527 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-ovsdbserver-sb\") pod \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\" (UID: \"885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5\") " Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.440212 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c607361-8750-4197-ab5d-071d2d63ba1f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.440227 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbb5h\" (UniqueName: \"kubernetes.io/projected/4c607361-8750-4197-ab5d-071d2d63ba1f-kube-api-access-sbb5h\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.446331 4796 scope.go:117] "RemoveContainer" containerID="048621eedacee387ce4bc4e7a8d288b150a894c61773fd4e86218171887be854" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.447084 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.452932 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 10:41:35 crc kubenswrapper[4796]: E1205 10:41:35.453339 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="684ee16c-6c42-46e9-b629-87cdff8b3076" containerName="glance-log" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.453357 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="684ee16c-6c42-46e9-b629-87cdff8b3076" containerName="glance-log" Dec 05 
10:41:35 crc kubenswrapper[4796]: E1205 10:41:35.453365 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c80cf99-cad2-41a5-bda0-eb1c8334da35" containerName="glance-httpd" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.453371 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c80cf99-cad2-41a5-bda0-eb1c8334da35" containerName="glance-httpd" Dec 05 10:41:35 crc kubenswrapper[4796]: E1205 10:41:35.453379 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c607361-8750-4197-ab5d-071d2d63ba1f" containerName="extract-content" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.453387 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c607361-8750-4197-ab5d-071d2d63ba1f" containerName="extract-content" Dec 05 10:41:35 crc kubenswrapper[4796]: E1205 10:41:35.453398 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c607361-8750-4197-ab5d-071d2d63ba1f" containerName="extract-utilities" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.453405 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c607361-8750-4197-ab5d-071d2d63ba1f" containerName="extract-utilities" Dec 05 10:41:35 crc kubenswrapper[4796]: E1205 10:41:35.453416 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c607361-8750-4197-ab5d-071d2d63ba1f" containerName="registry-server" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.453421 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c607361-8750-4197-ab5d-071d2d63ba1f" containerName="registry-server" Dec 05 10:41:35 crc kubenswrapper[4796]: E1205 10:41:35.453431 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5" containerName="init" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.453437 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5" containerName="init" Dec 05 10:41:35 crc kubenswrapper[4796]: 
E1205 10:41:35.453445 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5" containerName="dnsmasq-dns" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.453451 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5" containerName="dnsmasq-dns" Dec 05 10:41:35 crc kubenswrapper[4796]: E1205 10:41:35.453463 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c80cf99-cad2-41a5-bda0-eb1c8334da35" containerName="glance-log" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.453468 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c80cf99-cad2-41a5-bda0-eb1c8334da35" containerName="glance-log" Dec 05 10:41:35 crc kubenswrapper[4796]: E1205 10:41:35.453483 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="684ee16c-6c42-46e9-b629-87cdff8b3076" containerName="glance-httpd" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.453488 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="684ee16c-6c42-46e9-b629-87cdff8b3076" containerName="glance-httpd" Dec 05 10:41:35 crc kubenswrapper[4796]: E1205 10:41:35.453503 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf200531-c073-4d03-916a-cb3f54c5aa89" containerName="keystone-bootstrap" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.453509 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf200531-c073-4d03-916a-cb3f54c5aa89" containerName="keystone-bootstrap" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.453660 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="684ee16c-6c42-46e9-b629-87cdff8b3076" containerName="glance-log" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.453673 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="684ee16c-6c42-46e9-b629-87cdff8b3076" containerName="glance-httpd" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.453697 4796 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4c607361-8750-4197-ab5d-071d2d63ba1f" containerName="registry-server" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.453707 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5" containerName="dnsmasq-dns" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.453716 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c80cf99-cad2-41a5-bda0-eb1c8334da35" containerName="glance-httpd" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.453724 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf200531-c073-4d03-916a-cb3f54c5aa89" containerName="keystone-bootstrap" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.453735 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c80cf99-cad2-41a5-bda0-eb1c8334da35" containerName="glance-log" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.454608 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.457577 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-rlc4g" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.457755 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.457938 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.458042 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.470696 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.483696 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.488263 4796 scope.go:117] "RemoveContainer" containerID="8d35e8478ce2098bacd764825f62c538e503f00c0a23634a97440aa077ebdf05" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.490804 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.493588 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-kube-api-access-9fd29" (OuterVolumeSpecName: "kube-api-access-9fd29") pod "885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5" (UID: "885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5"). InnerVolumeSpecName "kube-api-access-9fd29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.498676 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.500346 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.504225 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.505592 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.521813 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.543061 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3481f623-3439-4c17-ab95-7bf31e8fa3a0-logs\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.543111 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-config-data\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.543383 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod 
\"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.543451 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-scripts\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.543489 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.543664 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3481f623-3439-4c17-ab95-7bf31e8fa3a0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.543741 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.544104 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fd29\" (UniqueName: \"kubernetes.io/projected/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-kube-api-access-9fd29\") 
on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.558954 4796 scope.go:117] "RemoveContainer" containerID="216bcca1dfa13bc88515e966c688b3f57739a7d50783f5f2fb1d1553397e8112" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.608539 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qh5mr"] Dec 05 10:41:35 crc kubenswrapper[4796]: W1205 10:41:35.640126 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04e95ca1_d131_4528_baaf_0be6b98a5edf.slice/crio-79b9cfebc3f4e1bf5afe11b73bc9b634932f2a64d06ff4ca7847a224c7599d91 WatchSource:0}: Error finding container 79b9cfebc3f4e1bf5afe11b73bc9b634932f2a64d06ff4ca7847a224c7599d91: Status 404 returned error can't find the container with id 79b9cfebc3f4e1bf5afe11b73bc9b634932f2a64d06ff4ca7847a224c7599d91 Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.645669 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-scripts\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.645731 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.645753 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl95b\" (UniqueName: \"kubernetes.io/projected/3481f623-3439-4c17-ab95-7bf31e8fa3a0-kube-api-access-tl95b\") pod 
\"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.645784 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.645812 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.645827 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.645854 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.645872 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3481f623-3439-4c17-ab95-7bf31e8fa3a0-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.645898 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.645926 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-logs\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.645951 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.645972 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrwjq\" (UniqueName: \"kubernetes.io/projected/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-kube-api-access-mrwjq\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.645999 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.646025 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3481f623-3439-4c17-ab95-7bf31e8fa3a0-logs\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.646043 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-config-data\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.646059 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.646377 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.646525 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3481f623-3439-4c17-ab95-7bf31e8fa3a0-logs\") pod 
\"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.647036 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-46e3-account-create-4b4vf"] Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.648084 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3481f623-3439-4c17-ab95-7bf31e8fa3a0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.650020 4796 scope.go:117] "RemoveContainer" containerID="be778e2ee71b23f02bfc4be6165a7a41c838d9732dd68b01738ea3d2842e410a" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.652012 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78ddb58-f7j44"] Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.653578 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-scripts\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.655408 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.657553 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-config-data\") 
pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.660129 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.677956 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.691435 4796 scope.go:117] "RemoveContainer" containerID="2c8af599a309e4cde1de3ef06f4e4f8241cf65c003a248f531baffe6738311a5" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.692453 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7ddql"] Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.702035 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7ddql"] Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.737109 4796 scope.go:117] "RemoveContainer" containerID="ba66d9f478790115d2350defabdf10cc4573891262fb46a84feb943043315e92" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.748023 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.748081 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.748108 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.748130 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrwjq\" (UniqueName: \"kubernetes.io/projected/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-kube-api-access-mrwjq\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.748154 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.748220 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl95b\" (UniqueName: \"kubernetes.io/projected/3481f623-3439-4c17-ab95-7bf31e8fa3a0-kube-api-access-tl95b\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.748246 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-scripts\") pod \"glance-default-internal-api-0\" 
(UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.748270 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.748285 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.748292 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.748736 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.748915 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-logs\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " 
pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.752610 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.753597 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.755341 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.765959 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrwjq\" (UniqueName: \"kubernetes.io/projected/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-kube-api-access-mrwjq\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.767371 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: 
I1205 10:41:35.771507 4796 scope.go:117] "RemoveContainer" containerID="a100e0dbeb0419045eeedd96547edb2b013ab8cab5835652b907232f44cfd5c9" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.774385 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl95b\" (UniqueName: \"kubernetes.io/projected/3481f623-3439-4c17-ab95-7bf31e8fa3a0-kube-api-access-tl95b\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.807139 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " pod="openstack/glance-default-external-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.824759 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5" (UID: "885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.850457 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.850912 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.851318 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2fftm"] Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.858164 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69fdddc9b6-2ckhp"] Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.867827 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5" (UID: "885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.869104 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-config" (OuterVolumeSpecName: "config") pod "885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5" (UID: "885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.869812 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5" (UID: "885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.879027 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5" (UID: "885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.952641 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.952695 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.952706 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:35 crc kubenswrapper[4796]: I1205 10:41:35.952716 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5-ovsdbserver-sb\") on node \"crc\" DevicePath 
\"\"" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.039636 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c80cf99-cad2-41a5-bda0-eb1c8334da35" path="/var/lib/kubelet/pods/2c80cf99-cad2-41a5-bda0-eb1c8334da35/volumes" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.040396 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c607361-8750-4197-ab5d-071d2d63ba1f" path="/var/lib/kubelet/pods/4c607361-8750-4197-ab5d-071d2d63ba1f/volumes" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.041213 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="684ee16c-6c42-46e9-b629-87cdff8b3076" path="/var/lib/kubelet/pods/684ee16c-6c42-46e9-b629-87cdff8b3076/volumes" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.071747 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864b648dc7-zjf7j"] Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.076654 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-864b648dc7-zjf7j"] Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.079057 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.099147 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dfsrl"] Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.104851 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dfsrl"] Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.121656 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.217776 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-qx4tb"] Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.218899 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.223039 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.223216 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.223330 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.223431 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9thmp" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.225274 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qx4tb"] Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.363144 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-combined-ca-bundle\") pod \"keystone-bootstrap-qx4tb\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.363648 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-scripts\") pod \"keystone-bootstrap-qx4tb\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " 
pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.363799 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p66d5\" (UniqueName: \"kubernetes.io/projected/af696b32-cadf-4959-9c28-21804524ce8b-kube-api-access-p66d5\") pod \"keystone-bootstrap-qx4tb\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.363941 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-config-data\") pod \"keystone-bootstrap-qx4tb\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.363964 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-fernet-keys\") pod \"keystone-bootstrap-qx4tb\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.364014 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-credential-keys\") pod \"keystone-bootstrap-qx4tb\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.428115 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01d9c19c-9758-4d61-8a0d-53868923bfea","Type":"ContainerStarted","Data":"186ef79ea79d9794349c468ee4c95e9aaa2df944f728755ad975f9f87a71034c"} Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.432658 
4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2fftm" event={"ID":"b644d420-f868-46a7-9e03-1d6fc4f78894","Type":"ContainerStarted","Data":"6a23d423c17c73fc4ae74d6a8417f99c8fd56badf8a9c8f338677a1b4a69a2c5"} Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.436434 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qh5mr" event={"ID":"04e95ca1-d131-4528-baaf-0be6b98a5edf","Type":"ContainerStarted","Data":"79b9cfebc3f4e1bf5afe11b73bc9b634932f2a64d06ff4ca7847a224c7599d91"} Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.445421 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2mbjb" event={"ID":"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6","Type":"ContainerStarted","Data":"400d6ee6c59313c7030142b9e3451c7046a5db8f67c0a072693dfe1ddf2596f3"} Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.457616 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75f565c8bf-4qfx5" event={"ID":"e9dd19b1-8fb3-439c-80e1-126c13ca90da","Type":"ContainerStarted","Data":"abc2e68f17f74c86686a448eacd7b2c1a7222548b9997a3839cc07751f057cbf"} Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.457676 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75f565c8bf-4qfx5" event={"ID":"e9dd19b1-8fb3-439c-80e1-126c13ca90da","Type":"ContainerStarted","Data":"6581a8647abd7af5e40850e6db8eb84d5aaa95fc84edd466907731e4f75fbb6f"} Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.457871 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75f565c8bf-4qfx5" podUID="e9dd19b1-8fb3-439c-80e1-126c13ca90da" containerName="horizon-log" containerID="cri-o://6581a8647abd7af5e40850e6db8eb84d5aaa95fc84edd466907731e4f75fbb6f" gracePeriod=30 Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.457982 4796 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/horizon-75f565c8bf-4qfx5" podUID="e9dd19b1-8fb3-439c-80e1-126c13ca90da" containerName="horizon" containerID="cri-o://abc2e68f17f74c86686a448eacd7b2c1a7222548b9997a3839cc07751f057cbf" gracePeriod=30 Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.465137 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-config-data\") pod \"keystone-bootstrap-qx4tb\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.465174 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-fernet-keys\") pod \"keystone-bootstrap-qx4tb\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.465251 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-credential-keys\") pod \"keystone-bootstrap-qx4tb\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.465882 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-combined-ca-bundle\") pod \"keystone-bootstrap-qx4tb\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.465921 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-scripts\") pod \"keystone-bootstrap-qx4tb\" 
(UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.465955 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p66d5\" (UniqueName: \"kubernetes.io/projected/af696b32-cadf-4959-9c28-21804524ce8b-kube-api-access-p66d5\") pod \"keystone-bootstrap-qx4tb\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.466067 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c85cf4b9c-c294z" event={"ID":"c1c11605-37fa-4897-9583-2244b3de20c1","Type":"ContainerStarted","Data":"8d3ed8886e48d0937c93a06291a691bdaed8387c4ffbf9ab4f049f5f350aacb4"} Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.466101 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c85cf4b9c-c294z" event={"ID":"c1c11605-37fa-4897-9583-2244b3de20c1","Type":"ContainerStarted","Data":"20a59c54d7fa14a46c2ec3e2c2604bd380acd900ec4e0dcfcb35b4bf823514f1"} Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.466239 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c85cf4b9c-c294z" podUID="c1c11605-37fa-4897-9583-2244b3de20c1" containerName="horizon-log" containerID="cri-o://20a59c54d7fa14a46c2ec3e2c2604bd380acd900ec4e0dcfcb35b4bf823514f1" gracePeriod=30 Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.466349 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5c85cf4b9c-c294z" podUID="c1c11605-37fa-4897-9583-2244b3de20c1" containerName="horizon" containerID="cri-o://8d3ed8886e48d0937c93a06291a691bdaed8387c4ffbf9ab4f049f5f350aacb4" gracePeriod=30 Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.473994 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-2mbjb" 
podStartSLOduration=3.014942773 podStartE2EDuration="16.473942305s" podCreationTimestamp="2025-12-05 10:41:20 +0000 UTC" firstStartedPulling="2025-12-05 10:41:21.555484683 +0000 UTC m=+827.843590197" lastFinishedPulling="2025-12-05 10:41:35.014484215 +0000 UTC m=+841.302589729" observedRunningTime="2025-12-05 10:41:36.46766742 +0000 UTC m=+842.755772933" watchObservedRunningTime="2025-12-05 10:41:36.473942305 +0000 UTC m=+842.762047819" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.478426 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-credential-keys\") pod \"keystone-bootstrap-qx4tb\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.479559 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-config-data\") pod \"keystone-bootstrap-qx4tb\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.480259 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8549c6756f-mnmb6" event={"ID":"611e3579-a9f1-409e-9d3a-071a436916fd","Type":"ContainerStarted","Data":"e4d7293a379e60bcb7cc7b73d508695005527a736fc8f795f52f71a3ef1c47b7"} Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.480318 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8549c6756f-mnmb6" event={"ID":"611e3579-a9f1-409e-9d3a-071a436916fd","Type":"ContainerStarted","Data":"aca2ca046aa59b204780b31a82bc88bd01a4ad9f53f2bfc1cc707610a214f7e4"} Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.480578 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8549c6756f-mnmb6" 
podUID="611e3579-a9f1-409e-9d3a-071a436916fd" containerName="horizon-log" containerID="cri-o://aca2ca046aa59b204780b31a82bc88bd01a4ad9f53f2bfc1cc707610a214f7e4" gracePeriod=30 Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.480631 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-8549c6756f-mnmb6" podUID="611e3579-a9f1-409e-9d3a-071a436916fd" containerName="horizon" containerID="cri-o://e4d7293a379e60bcb7cc7b73d508695005527a736fc8f795f52f71a3ef1c47b7" gracePeriod=30 Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.481840 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-scripts\") pod \"keystone-bootstrap-qx4tb\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.482562 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-combined-ca-bundle\") pod \"keystone-bootstrap-qx4tb\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.483746 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-75f565c8bf-4qfx5" podStartSLOduration=11.937486175 podStartE2EDuration="12.483735099s" podCreationTimestamp="2025-12-05 10:41:24 +0000 UTC" firstStartedPulling="2025-12-05 10:41:34.922235484 +0000 UTC m=+841.210340997" lastFinishedPulling="2025-12-05 10:41:35.468484408 +0000 UTC m=+841.756589921" observedRunningTime="2025-12-05 10:41:36.483557816 +0000 UTC m=+842.771663329" watchObservedRunningTime="2025-12-05 10:41:36.483735099 +0000 UTC m=+842.771840612" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.488782 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-p66d5\" (UniqueName: \"kubernetes.io/projected/af696b32-cadf-4959-9c28-21804524ce8b-kube-api-access-p66d5\") pod \"keystone-bootstrap-qx4tb\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.494076 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-fernet-keys\") pod \"keystone-bootstrap-qx4tb\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.495435 4796 generic.go:334] "Generic (PLEG): container finished" podID="2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156" containerID="d2a7c7a81c273c72ab77d7542bb0d9da5a813aaa32bab8cc5356a140628e6fce" exitCode=0 Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.495667 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-46e3-account-create-4b4vf" event={"ID":"2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156","Type":"ContainerDied","Data":"d2a7c7a81c273c72ab77d7542bb0d9da5a813aaa32bab8cc5356a140628e6fce"} Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.495704 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-46e3-account-create-4b4vf" event={"ID":"2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156","Type":"ContainerStarted","Data":"352f66365f391f4a636fdc8c4ad6100288b0ea901d7f84daa0d7864db437d0e6"} Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.505197 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78ddb58-f7j44" event={"ID":"05623376-2343-40fb-a4df-508ce1e333e2","Type":"ContainerStarted","Data":"1f7ce385b510d1d3ec082d014bf3db715dfd7ae5563eda657a8e25b421929fab"} Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.505232 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78ddb58-f7j44" 
event={"ID":"05623376-2343-40fb-a4df-508ce1e333e2","Type":"ContainerStarted","Data":"493bba8700a2d5b4ed207998289ab6f6830ccbe8c542f2ab8696aae732832b66"} Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.507869 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5c85cf4b9c-c294z" podStartSLOduration=2.733327126 podStartE2EDuration="16.507859455s" podCreationTimestamp="2025-12-05 10:41:20 +0000 UTC" firstStartedPulling="2025-12-05 10:41:21.322278632 +0000 UTC m=+827.610384145" lastFinishedPulling="2025-12-05 10:41:35.096810961 +0000 UTC m=+841.384916474" observedRunningTime="2025-12-05 10:41:36.501708623 +0000 UTC m=+842.789814136" watchObservedRunningTime="2025-12-05 10:41:36.507859455 +0000 UTC m=+842.795964969" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.539598 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69fdddc9b6-2ckhp" event={"ID":"fdca92fe-39ad-41e9-978b-1757290eee03","Type":"ContainerStarted","Data":"015efb0f191edca8f32d554f7c90a5e7333f1ac4f7dc4f0419abd600758422d5"} Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.539645 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69fdddc9b6-2ckhp" event={"ID":"fdca92fe-39ad-41e9-978b-1757290eee03","Type":"ContainerStarted","Data":"fd3e0acef55e43bba925cd76104dd9c7cd665130af9566e803f5898ef89e4f5c"} Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.544781 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-8549c6756f-mnmb6" podStartSLOduration=2.796492492 podStartE2EDuration="16.54475543s" podCreationTimestamp="2025-12-05 10:41:20 +0000 UTC" firstStartedPulling="2025-12-05 10:41:21.313961024 +0000 UTC m=+827.602066536" lastFinishedPulling="2025-12-05 10:41:35.062223961 +0000 UTC m=+841.350329474" observedRunningTime="2025-12-05 10:41:36.5283737 +0000 UTC m=+842.816479213" watchObservedRunningTime="2025-12-05 10:41:36.54475543 +0000 UTC 
m=+842.832860943" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.580174 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-69fdddc9b6-2ckhp" podStartSLOduration=6.580150119 podStartE2EDuration="6.580150119s" podCreationTimestamp="2025-12-05 10:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:41:36.579838223 +0000 UTC m=+842.867943756" watchObservedRunningTime="2025-12-05 10:41:36.580150119 +0000 UTC m=+842.868255632" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.601832 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-78ddb58-f7j44" podStartSLOduration=6.601809058 podStartE2EDuration="6.601809058s" podCreationTimestamp="2025-12-05 10:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:41:36.598169109 +0000 UTC m=+842.886274622" watchObservedRunningTime="2025-12-05 10:41:36.601809058 +0000 UTC m=+842.889914561" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.635400 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.678301 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 10:41:36 crc kubenswrapper[4796]: I1205 10:41:36.766755 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 10:41:37 crc kubenswrapper[4796]: I1205 10:41:37.154491 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qx4tb"] Dec 05 10:41:37 crc kubenswrapper[4796]: I1205 10:41:37.550491 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qx4tb" event={"ID":"af696b32-cadf-4959-9c28-21804524ce8b","Type":"ContainerStarted","Data":"c7a5086aa98121866d5c5433b7f17b603bc95e3f6ef15f907b176cd2da32b1ee"} Dec 05 10:41:37 crc kubenswrapper[4796]: I1205 10:41:37.550904 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qx4tb" event={"ID":"af696b32-cadf-4959-9c28-21804524ce8b","Type":"ContainerStarted","Data":"f93f1da3893500b536522b4e555ce197e877fa242c2b7a567c97e302788852f2"} Dec 05 10:41:37 crc kubenswrapper[4796]: I1205 10:41:37.558476 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3481f623-3439-4c17-ab95-7bf31e8fa3a0","Type":"ContainerStarted","Data":"efd6c98a037dee2fc9d53df24d6f2199eb9527a02cf9f7d0cb221a4a61e36963"} Dec 05 10:41:37 crc kubenswrapper[4796]: I1205 10:41:37.558512 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3481f623-3439-4c17-ab95-7bf31e8fa3a0","Type":"ContainerStarted","Data":"aecba52a3639b9798b9a862ed57ce6c9b7569e894e54abd026998a4bef3fb801"} Dec 05 10:41:37 crc kubenswrapper[4796]: I1205 10:41:37.561957 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0","Type":"ContainerStarted","Data":"34d67ffb28f99a71a90f3c7a975e1f0870e9f1f0a35f292eb161abbf8b782cfa"} Dec 05 10:41:37 crc kubenswrapper[4796]: I1205 10:41:37.569867 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69fdddc9b6-2ckhp" event={"ID":"fdca92fe-39ad-41e9-978b-1757290eee03","Type":"ContainerStarted","Data":"2c7054840b5c4d67e55e14bcd05d7db816598f2a9fee7b6e51d65a69305a3d62"} Dec 05 10:41:37 crc kubenswrapper[4796]: I1205 10:41:37.571105 4796 generic.go:334] "Generic (PLEG): container finished" podID="8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6" containerID="400d6ee6c59313c7030142b9e3451c7046a5db8f67c0a072693dfe1ddf2596f3" exitCode=0 Dec 05 10:41:37 crc kubenswrapper[4796]: I1205 10:41:37.571147 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2mbjb" event={"ID":"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6","Type":"ContainerDied","Data":"400d6ee6c59313c7030142b9e3451c7046a5db8f67c0a072693dfe1ddf2596f3"} Dec 05 10:41:37 crc kubenswrapper[4796]: I1205 10:41:37.572743 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78ddb58-f7j44" event={"ID":"05623376-2343-40fb-a4df-508ce1e333e2","Type":"ContainerStarted","Data":"8c79603913e2e99320f16623ccc5a0baf99de97a7bda31d939e43334b7d71b81"} Dec 05 10:41:37 crc kubenswrapper[4796]: I1205 10:41:37.574992 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-qx4tb" podStartSLOduration=1.5749827669999998 podStartE2EDuration="1.574982767s" podCreationTimestamp="2025-12-05 10:41:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:41:37.562348747 +0000 UTC m=+843.850454260" watchObservedRunningTime="2025-12-05 10:41:37.574982767 +0000 UTC m=+843.863088281" Dec 05 10:41:38 crc kubenswrapper[4796]: I1205 10:41:38.039644 4796 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5" path="/var/lib/kubelet/pods/885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5/volumes" Dec 05 10:41:38 crc kubenswrapper[4796]: I1205 10:41:38.040640 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf200531-c073-4d03-916a-cb3f54c5aa89" path="/var/lib/kubelet/pods/cf200531-c073-4d03-916a-cb3f54c5aa89/volumes" Dec 05 10:41:38 crc kubenswrapper[4796]: I1205 10:41:38.600290 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0","Type":"ContainerStarted","Data":"f4339d37be037c60dc834a791ce9bf16e462f20b7fea0a6894e832156318fb00"} Dec 05 10:41:38 crc kubenswrapper[4796]: I1205 10:41:38.607599 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-46e3-account-create-4b4vf" event={"ID":"2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156","Type":"ContainerDied","Data":"352f66365f391f4a636fdc8c4ad6100288b0ea901d7f84daa0d7864db437d0e6"} Dec 05 10:41:38 crc kubenswrapper[4796]: I1205 10:41:38.607629 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="352f66365f391f4a636fdc8c4ad6100288b0ea901d7f84daa0d7864db437d0e6" Dec 05 10:41:38 crc kubenswrapper[4796]: I1205 10:41:38.688064 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-46e3-account-create-4b4vf" Dec 05 10:41:38 crc kubenswrapper[4796]: I1205 10:41:38.711560 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zkrb\" (UniqueName: \"kubernetes.io/projected/2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156-kube-api-access-6zkrb\") pod \"2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156\" (UID: \"2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156\") " Dec 05 10:41:38 crc kubenswrapper[4796]: I1205 10:41:38.716301 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156-kube-api-access-6zkrb" (OuterVolumeSpecName: "kube-api-access-6zkrb") pod "2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156" (UID: "2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156"). InnerVolumeSpecName "kube-api-access-6zkrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:38 crc kubenswrapper[4796]: I1205 10:41:38.822179 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zkrb\" (UniqueName: \"kubernetes.io/projected/2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156-kube-api-access-6zkrb\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:39 crc kubenswrapper[4796]: I1205 10:41:39.618223 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0","Type":"ContainerStarted","Data":"db5df645b78e6a38648826fbac5189c0614e92d15fc2d3518c6a53652a9a10ac"} Dec 05 10:41:39 crc kubenswrapper[4796]: I1205 10:41:39.631523 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01d9c19c-9758-4d61-8a0d-53868923bfea","Type":"ContainerStarted","Data":"31c2a93ccbce05c86cfaf88da621386eded154a39766c613e85e54c6d48448d6"} Dec 05 10:41:39 crc kubenswrapper[4796]: I1205 10:41:39.633120 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-46e3-account-create-4b4vf" Dec 05 10:41:39 crc kubenswrapper[4796]: I1205 10:41:39.633878 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3481f623-3439-4c17-ab95-7bf31e8fa3a0","Type":"ContainerStarted","Data":"53ead197645d5a519dff53b274a05f3c7fae717047ce888937387d43d99fdf00"} Dec 05 10:41:39 crc kubenswrapper[4796]: I1205 10:41:39.673289 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.673273626 podStartE2EDuration="4.673273626s" podCreationTimestamp="2025-12-05 10:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:41:39.65309917 +0000 UTC m=+845.941204683" watchObservedRunningTime="2025-12-05 10:41:39.673273626 +0000 UTC m=+845.961379140" Dec 05 10:41:39 crc kubenswrapper[4796]: I1205 10:41:39.676522 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.67650716 podStartE2EDuration="4.67650716s" podCreationTimestamp="2025-12-05 10:41:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:41:39.670850688 +0000 UTC m=+845.958956201" watchObservedRunningTime="2025-12-05 10:41:39.67650716 +0000 UTC m=+845.964612673" Dec 05 10:41:40 crc kubenswrapper[4796]: I1205 10:41:40.319836 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-864b648dc7-zjf7j" podUID="885ab35e-9f3a-43b2-8c03-5d5d9f6af4b5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: i/o timeout" Dec 05 10:41:40 crc kubenswrapper[4796]: I1205 10:41:40.552066 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:40 crc kubenswrapper[4796]: I1205 10:41:40.552122 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:41:40 crc kubenswrapper[4796]: I1205 10:41:40.601575 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:40 crc kubenswrapper[4796]: I1205 10:41:40.601891 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:41:40 crc kubenswrapper[4796]: I1205 10:41:40.644359 4796 generic.go:334] "Generic (PLEG): container finished" podID="af696b32-cadf-4959-9c28-21804524ce8b" containerID="c7a5086aa98121866d5c5433b7f17b603bc95e3f6ef15f907b176cd2da32b1ee" exitCode=0 Dec 05 10:41:40 crc kubenswrapper[4796]: I1205 10:41:40.644414 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qx4tb" event={"ID":"af696b32-cadf-4959-9c28-21804524ce8b","Type":"ContainerDied","Data":"c7a5086aa98121866d5c5433b7f17b603bc95e3f6ef15f907b176cd2da32b1ee"} Dec 05 10:41:40 crc kubenswrapper[4796]: I1205 10:41:40.654512 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:41:40 crc kubenswrapper[4796]: I1205 10:41:40.895615 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.172121 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-gqtd9"] Dec 05 10:41:42 crc kubenswrapper[4796]: E1205 10:41:42.173074 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156" containerName="mariadb-account-create" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.173091 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156" 
containerName="mariadb-account-create" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.173731 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156" containerName="mariadb-account-create" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.174623 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gqtd9" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.191496 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.191812 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-277hs" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.191933 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.205737 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gqtd9"] Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.273869 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2mbjb" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.294224 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.299982 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv2tk\" (UniqueName: \"kubernetes.io/projected/d2395df9-6a76-405e-b346-ee634afb272c-kube-api-access-sv2tk\") pod \"neutron-db-sync-gqtd9\" (UID: \"d2395df9-6a76-405e-b346-ee634afb272c\") " pod="openstack/neutron-db-sync-gqtd9" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.300036 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2395df9-6a76-405e-b346-ee634afb272c-combined-ca-bundle\") pod \"neutron-db-sync-gqtd9\" (UID: \"d2395df9-6a76-405e-b346-ee634afb272c\") " pod="openstack/neutron-db-sync-gqtd9" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.300085 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2395df9-6a76-405e-b346-ee634afb272c-config\") pod \"neutron-db-sync-gqtd9\" (UID: \"d2395df9-6a76-405e-b346-ee634afb272c\") " pod="openstack/neutron-db-sync-gqtd9" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.401966 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qssjj\" (UniqueName: \"kubernetes.io/projected/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-kube-api-access-qssjj\") pod \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\" (UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.402213 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-combined-ca-bundle\") pod \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\" (UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " Dec 05 10:41:42 crc 
kubenswrapper[4796]: I1205 10:41:42.402285 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-scripts\") pod \"af696b32-cadf-4959-9c28-21804524ce8b\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.402301 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-fernet-keys\") pod \"af696b32-cadf-4959-9c28-21804524ce8b\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.402353 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p66d5\" (UniqueName: \"kubernetes.io/projected/af696b32-cadf-4959-9c28-21804524ce8b-kube-api-access-p66d5\") pod \"af696b32-cadf-4959-9c28-21804524ce8b\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.402374 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-config-data\") pod \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\" (UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.402478 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-logs\") pod \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\" (UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.402523 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-scripts\") pod \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\" 
(UID: \"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6\") " Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.402548 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-credential-keys\") pod \"af696b32-cadf-4959-9c28-21804524ce8b\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.402710 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-config-data\") pod \"af696b32-cadf-4959-9c28-21804524ce8b\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.402747 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-combined-ca-bundle\") pod \"af696b32-cadf-4959-9c28-21804524ce8b\" (UID: \"af696b32-cadf-4959-9c28-21804524ce8b\") " Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.403012 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2395df9-6a76-405e-b346-ee634afb272c-config\") pod \"neutron-db-sync-gqtd9\" (UID: \"d2395df9-6a76-405e-b346-ee634afb272c\") " pod="openstack/neutron-db-sync-gqtd9" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.403134 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv2tk\" (UniqueName: \"kubernetes.io/projected/d2395df9-6a76-405e-b346-ee634afb272c-kube-api-access-sv2tk\") pod \"neutron-db-sync-gqtd9\" (UID: \"d2395df9-6a76-405e-b346-ee634afb272c\") " pod="openstack/neutron-db-sync-gqtd9" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.403182 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2395df9-6a76-405e-b346-ee634afb272c-combined-ca-bundle\") pod \"neutron-db-sync-gqtd9\" (UID: \"d2395df9-6a76-405e-b346-ee634afb272c\") " pod="openstack/neutron-db-sync-gqtd9" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.404728 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-logs" (OuterVolumeSpecName: "logs") pod "8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6" (UID: "8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.408598 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2395df9-6a76-405e-b346-ee634afb272c-config\") pod \"neutron-db-sync-gqtd9\" (UID: \"d2395df9-6a76-405e-b346-ee634afb272c\") " pod="openstack/neutron-db-sync-gqtd9" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.409854 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "af696b32-cadf-4959-9c28-21804524ce8b" (UID: "af696b32-cadf-4959-9c28-21804524ce8b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.412806 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-scripts" (OuterVolumeSpecName: "scripts") pod "af696b32-cadf-4959-9c28-21804524ce8b" (UID: "af696b32-cadf-4959-9c28-21804524ce8b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.414879 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-kube-api-access-qssjj" (OuterVolumeSpecName: "kube-api-access-qssjj") pod "8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6" (UID: "8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6"). InnerVolumeSpecName "kube-api-access-qssjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.416383 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2395df9-6a76-405e-b346-ee634afb272c-combined-ca-bundle\") pod \"neutron-db-sync-gqtd9\" (UID: \"d2395df9-6a76-405e-b346-ee634afb272c\") " pod="openstack/neutron-db-sync-gqtd9" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.417427 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv2tk\" (UniqueName: \"kubernetes.io/projected/d2395df9-6a76-405e-b346-ee634afb272c-kube-api-access-sv2tk\") pod \"neutron-db-sync-gqtd9\" (UID: \"d2395df9-6a76-405e-b346-ee634afb272c\") " pod="openstack/neutron-db-sync-gqtd9" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.417952 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-scripts" (OuterVolumeSpecName: "scripts") pod "8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6" (UID: "8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.418404 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af696b32-cadf-4959-9c28-21804524ce8b-kube-api-access-p66d5" (OuterVolumeSpecName: "kube-api-access-p66d5") pod "af696b32-cadf-4959-9c28-21804524ce8b" (UID: "af696b32-cadf-4959-9c28-21804524ce8b"). InnerVolumeSpecName "kube-api-access-p66d5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.418782 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "af696b32-cadf-4959-9c28-21804524ce8b" (UID: "af696b32-cadf-4959-9c28-21804524ce8b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.440507 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6" (UID: "8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.469039 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af696b32-cadf-4959-9c28-21804524ce8b" (UID: "af696b32-cadf-4959-9c28-21804524ce8b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.473965 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-config-data" (OuterVolumeSpecName: "config-data") pod "af696b32-cadf-4959-9c28-21804524ce8b" (UID: "af696b32-cadf-4959-9c28-21804524ce8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.489843 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-config-data" (OuterVolumeSpecName: "config-data") pod "8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6" (UID: "8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.504968 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-logs\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.504995 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.505006 4796 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.505017 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.505026 4796 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.505037 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qssjj\" (UniqueName: \"kubernetes.io/projected/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-kube-api-access-qssjj\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.505045 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.505061 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.505070 4796 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/af696b32-cadf-4959-9c28-21804524ce8b-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.505078 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p66d5\" (UniqueName: \"kubernetes.io/projected/af696b32-cadf-4959-9c28-21804524ce8b-kube-api-access-p66d5\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.505086 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.630561 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-gqtd9" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.692968 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qx4tb" event={"ID":"af696b32-cadf-4959-9c28-21804524ce8b","Type":"ContainerDied","Data":"f93f1da3893500b536522b4e555ce197e877fa242c2b7a567c97e302788852f2"} Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.693142 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f93f1da3893500b536522b4e555ce197e877fa242c2b7a567c97e302788852f2" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.693287 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qx4tb" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.705341 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2fftm" event={"ID":"b644d420-f868-46a7-9e03-1d6fc4f78894","Type":"ContainerStarted","Data":"b5ca45481fee61e47936682215a8abf913a4be10b620f8e09d6b06b44dc0161d"} Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.730098 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-2fftm" podStartSLOduration=4.390501942 podStartE2EDuration="10.730078249s" podCreationTimestamp="2025-12-05 10:41:32 +0000 UTC" firstStartedPulling="2025-12-05 10:41:35.858252388 +0000 UTC m=+842.146357901" lastFinishedPulling="2025-12-05 10:41:42.197828696 +0000 UTC m=+848.485934208" observedRunningTime="2025-12-05 10:41:42.724447255 +0000 UTC m=+849.012552768" watchObservedRunningTime="2025-12-05 10:41:42.730078249 +0000 UTC m=+849.018183762" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.732883 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2mbjb" 
event={"ID":"8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6","Type":"ContainerDied","Data":"60ae0e444b3a3cc360833bb5378aa6878409597a356cd48bbdbef86f966f1372"} Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.733512 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60ae0e444b3a3cc360833bb5378aa6878409597a356cd48bbdbef86f966f1372" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.733095 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2mbjb" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.759979 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5dbb66bc57-9lclx"] Dec 05 10:41:42 crc kubenswrapper[4796]: E1205 10:41:42.760467 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6" containerName="placement-db-sync" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.760490 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6" containerName="placement-db-sync" Dec 05 10:41:42 crc kubenswrapper[4796]: E1205 10:41:42.760529 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af696b32-cadf-4959-9c28-21804524ce8b" containerName="keystone-bootstrap" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.760535 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="af696b32-cadf-4959-9c28-21804524ce8b" containerName="keystone-bootstrap" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.760725 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6" containerName="placement-db-sync" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.760741 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="af696b32-cadf-4959-9c28-21804524ce8b" containerName="keystone-bootstrap" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.761379 4796 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.766120 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.766399 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-9thmp" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.766523 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.766628 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.766884 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.767001 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.791535 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5dbb66bc57-9lclx"] Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.920951 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-credential-keys\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.921211 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-scripts\") pod \"keystone-5dbb66bc57-9lclx\" (UID: 
\"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.921236 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-public-tls-certs\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.921261 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25vdr\" (UniqueName: \"kubernetes.io/projected/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-kube-api-access-25vdr\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.921304 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-config-data\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.921334 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-fernet-keys\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.921387 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-internal-tls-certs\") pod 
\"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.921406 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-combined-ca-bundle\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:42 crc kubenswrapper[4796]: E1205 10:41:42.962962 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b1c27bb_58c0_4631_ba14_f1cddf9ecdf6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf696b32_cadf_4959_9c28_21804524ce8b.slice/crio-f93f1da3893500b536522b4e555ce197e877fa242c2b7a567c97e302788852f2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b1c27bb_58c0_4631_ba14_f1cddf9ecdf6.slice/crio-60ae0e444b3a3cc360833bb5378aa6878409597a356cd48bbdbef86f966f1372\": RecentStats: unable to find data in memory cache]" Dec 05 10:41:42 crc kubenswrapper[4796]: I1205 10:41:42.995142 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-gqtd9"] Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.022834 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-fernet-keys\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.022948 4796 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-internal-tls-certs\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.022977 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-combined-ca-bundle\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.023083 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-credential-keys\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.023107 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-scripts\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.023130 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-public-tls-certs\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.023151 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25vdr\" (UniqueName: 
\"kubernetes.io/projected/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-kube-api-access-25vdr\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.023216 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-config-data\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.025793 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-credential-keys\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.028093 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-internal-tls-certs\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.030525 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-config-data\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.030622 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-fernet-keys\") pod \"keystone-5dbb66bc57-9lclx\" (UID: 
\"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.030979 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-scripts\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.031412 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-combined-ca-bundle\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.032528 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-public-tls-certs\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.038093 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25vdr\" (UniqueName: \"kubernetes.io/projected/e6b0a08e-b4c3-4947-9c4e-67a863d92dca-kube-api-access-25vdr\") pod \"keystone-5dbb66bc57-9lclx\" (UID: \"e6b0a08e-b4c3-4947-9c4e-67a863d92dca\") " pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.102923 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.451379 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-644c454648-8vjkb"] Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.452984 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.455065 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.455361 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.455722 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.456000 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-wn2rf" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.456096 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.471601 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-644c454648-8vjkb"] Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.536612 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f219a85-81d2-4337-bed6-507debdb79dd-public-tls-certs\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.536729 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zzv5x\" (UniqueName: \"kubernetes.io/projected/8f219a85-81d2-4337-bed6-507debdb79dd-kube-api-access-zzv5x\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.536776 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f219a85-81d2-4337-bed6-507debdb79dd-logs\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.536797 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f219a85-81d2-4337-bed6-507debdb79dd-config-data\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.536953 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f219a85-81d2-4337-bed6-507debdb79dd-combined-ca-bundle\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.536972 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f219a85-81d2-4337-bed6-507debdb79dd-internal-tls-certs\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.537020 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f219a85-81d2-4337-bed6-507debdb79dd-scripts\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.622860 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5dbb66bc57-9lclx"] Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.644613 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f219a85-81d2-4337-bed6-507debdb79dd-combined-ca-bundle\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.645465 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f219a85-81d2-4337-bed6-507debdb79dd-internal-tls-certs\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.645571 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f219a85-81d2-4337-bed6-507debdb79dd-scripts\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.647163 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f219a85-81d2-4337-bed6-507debdb79dd-public-tls-certs\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 
10:41:43.647238 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzv5x\" (UniqueName: \"kubernetes.io/projected/8f219a85-81d2-4337-bed6-507debdb79dd-kube-api-access-zzv5x\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.648158 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f219a85-81d2-4337-bed6-507debdb79dd-logs\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.648218 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f219a85-81d2-4337-bed6-507debdb79dd-config-data\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.653522 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f219a85-81d2-4337-bed6-507debdb79dd-internal-tls-certs\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.653826 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f219a85-81d2-4337-bed6-507debdb79dd-logs\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.654123 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8f219a85-81d2-4337-bed6-507debdb79dd-combined-ca-bundle\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.655758 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f219a85-81d2-4337-bed6-507debdb79dd-scripts\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.660076 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f219a85-81d2-4337-bed6-507debdb79dd-public-tls-certs\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.661539 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f219a85-81d2-4337-bed6-507debdb79dd-config-data\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.677916 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzv5x\" (UniqueName: \"kubernetes.io/projected/8f219a85-81d2-4337-bed6-507debdb79dd-kube-api-access-zzv5x\") pod \"placement-644c454648-8vjkb\" (UID: \"8f219a85-81d2-4337-bed6-507debdb79dd\") " pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.760129 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gqtd9" 
event={"ID":"d2395df9-6a76-405e-b346-ee634afb272c","Type":"ContainerStarted","Data":"48ff5b23dafcbc2a56d1da5ed9be272255df80129833f9880f658f5c376a34ab"} Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.760220 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gqtd9" event={"ID":"d2395df9-6a76-405e-b346-ee634afb272c","Type":"ContainerStarted","Data":"c0e546517a639bdf5c28ed3ef242baf1695b76d0d9bc472b1d8126caf174b303"} Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.781223 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5dbb66bc57-9lclx" event={"ID":"e6b0a08e-b4c3-4947-9c4e-67a863d92dca","Type":"ContainerStarted","Data":"045752e9483f5664b0906457c096944130a8fe1cad99572fac594f306475e980"} Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.783623 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-gqtd9" podStartSLOduration=1.783603017 podStartE2EDuration="1.783603017s" podCreationTimestamp="2025-12-05 10:41:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:41:43.775225265 +0000 UTC m=+850.063330778" watchObservedRunningTime="2025-12-05 10:41:43.783603017 +0000 UTC m=+850.071708531" Dec 05 10:41:43 crc kubenswrapper[4796]: I1205 10:41:43.790162 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:44 crc kubenswrapper[4796]: I1205 10:41:44.350794 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-644c454648-8vjkb"] Dec 05 10:41:44 crc kubenswrapper[4796]: I1205 10:41:44.794960 4796 generic.go:334] "Generic (PLEG): container finished" podID="b644d420-f868-46a7-9e03-1d6fc4f78894" containerID="b5ca45481fee61e47936682215a8abf913a4be10b620f8e09d6b06b44dc0161d" exitCode=0 Dec 05 10:41:44 crc kubenswrapper[4796]: I1205 10:41:44.795260 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2fftm" event={"ID":"b644d420-f868-46a7-9e03-1d6fc4f78894","Type":"ContainerDied","Data":"b5ca45481fee61e47936682215a8abf913a4be10b620f8e09d6b06b44dc0161d"} Dec 05 10:41:44 crc kubenswrapper[4796]: I1205 10:41:44.798415 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5dbb66bc57-9lclx" event={"ID":"e6b0a08e-b4c3-4947-9c4e-67a863d92dca","Type":"ContainerStarted","Data":"ae719598aaa931579d64c2592f7c6c6419f6fa79c0d697c67bbb5f1343b4ae87"} Dec 05 10:41:44 crc kubenswrapper[4796]: I1205 10:41:44.799132 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:41:44 crc kubenswrapper[4796]: I1205 10:41:44.802408 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-644c454648-8vjkb" event={"ID":"8f219a85-81d2-4337-bed6-507debdb79dd","Type":"ContainerStarted","Data":"1b0189c6013b773012226a290386be1fafa1b0de57a5a40fe2bfa2115a87fb42"} Dec 05 10:41:44 crc kubenswrapper[4796]: I1205 10:41:44.802435 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-644c454648-8vjkb" event={"ID":"8f219a85-81d2-4337-bed6-507debdb79dd","Type":"ContainerStarted","Data":"6b5d43dc087c1d5a8aebe84e50d66bc99dd87a5512ea57e0392e8aa654ebf51e"} Dec 05 10:41:44 crc kubenswrapper[4796]: I1205 10:41:44.834035 4796 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5dbb66bc57-9lclx" podStartSLOduration=2.834022685 podStartE2EDuration="2.834022685s" podCreationTimestamp="2025-12-05 10:41:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:41:44.825003846 +0000 UTC m=+851.113109359" watchObservedRunningTime="2025-12-05 10:41:44.834022685 +0000 UTC m=+851.122128198" Dec 05 10:41:44 crc kubenswrapper[4796]: I1205 10:41:44.921797 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:41:45 crc kubenswrapper[4796]: I1205 10:41:45.436234 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pkrsm"] Dec 05 10:41:45 crc kubenswrapper[4796]: I1205 10:41:45.485671 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkrsm"] Dec 05 10:41:45 crc kubenswrapper[4796]: I1205 10:41:45.485782 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkrsm" Dec 05 10:41:45 crc kubenswrapper[4796]: I1205 10:41:45.593082 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d1fa9e-96e1-44c5-89dd-e2c619890cee-catalog-content\") pod \"redhat-operators-pkrsm\" (UID: \"02d1fa9e-96e1-44c5-89dd-e2c619890cee\") " pod="openshift-marketplace/redhat-operators-pkrsm" Dec 05 10:41:45 crc kubenswrapper[4796]: I1205 10:41:45.593360 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d1fa9e-96e1-44c5-89dd-e2c619890cee-utilities\") pod \"redhat-operators-pkrsm\" (UID: \"02d1fa9e-96e1-44c5-89dd-e2c619890cee\") " pod="openshift-marketplace/redhat-operators-pkrsm" Dec 05 10:41:45 crc kubenswrapper[4796]: I1205 10:41:45.593447 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlzf2\" (UniqueName: \"kubernetes.io/projected/02d1fa9e-96e1-44c5-89dd-e2c619890cee-kube-api-access-mlzf2\") pod \"redhat-operators-pkrsm\" (UID: \"02d1fa9e-96e1-44c5-89dd-e2c619890cee\") " pod="openshift-marketplace/redhat-operators-pkrsm" Dec 05 10:41:45 crc kubenswrapper[4796]: I1205 10:41:45.694731 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d1fa9e-96e1-44c5-89dd-e2c619890cee-catalog-content\") pod \"redhat-operators-pkrsm\" (UID: \"02d1fa9e-96e1-44c5-89dd-e2c619890cee\") " pod="openshift-marketplace/redhat-operators-pkrsm" Dec 05 10:41:45 crc kubenswrapper[4796]: I1205 10:41:45.694811 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d1fa9e-96e1-44c5-89dd-e2c619890cee-utilities\") pod \"redhat-operators-pkrsm\" (UID: 
\"02d1fa9e-96e1-44c5-89dd-e2c619890cee\") " pod="openshift-marketplace/redhat-operators-pkrsm" Dec 05 10:41:45 crc kubenswrapper[4796]: I1205 10:41:45.694841 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlzf2\" (UniqueName: \"kubernetes.io/projected/02d1fa9e-96e1-44c5-89dd-e2c619890cee-kube-api-access-mlzf2\") pod \"redhat-operators-pkrsm\" (UID: \"02d1fa9e-96e1-44c5-89dd-e2c619890cee\") " pod="openshift-marketplace/redhat-operators-pkrsm" Dec 05 10:41:45 crc kubenswrapper[4796]: I1205 10:41:45.695517 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d1fa9e-96e1-44c5-89dd-e2c619890cee-catalog-content\") pod \"redhat-operators-pkrsm\" (UID: \"02d1fa9e-96e1-44c5-89dd-e2c619890cee\") " pod="openshift-marketplace/redhat-operators-pkrsm" Dec 05 10:41:45 crc kubenswrapper[4796]: I1205 10:41:45.695519 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d1fa9e-96e1-44c5-89dd-e2c619890cee-utilities\") pod \"redhat-operators-pkrsm\" (UID: \"02d1fa9e-96e1-44c5-89dd-e2c619890cee\") " pod="openshift-marketplace/redhat-operators-pkrsm" Dec 05 10:41:45 crc kubenswrapper[4796]: I1205 10:41:45.714223 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlzf2\" (UniqueName: \"kubernetes.io/projected/02d1fa9e-96e1-44c5-89dd-e2c619890cee-kube-api-access-mlzf2\") pod \"redhat-operators-pkrsm\" (UID: \"02d1fa9e-96e1-44c5-89dd-e2c619890cee\") " pod="openshift-marketplace/redhat-operators-pkrsm" Dec 05 10:41:45 crc kubenswrapper[4796]: I1205 10:41:45.813546 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkrsm" Dec 05 10:41:45 crc kubenswrapper[4796]: I1205 10:41:45.817184 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-644c454648-8vjkb" event={"ID":"8f219a85-81d2-4337-bed6-507debdb79dd","Type":"ContainerStarted","Data":"dbc4eed9d8042d52a87627d42f7710bdbe8d4b536cc539b9ed5ab03ca29197f6"} Dec 05 10:41:45 crc kubenswrapper[4796]: I1205 10:41:45.817347 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:45 crc kubenswrapper[4796]: I1205 10:41:45.817381 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-644c454648-8vjkb" Dec 05 10:41:45 crc kubenswrapper[4796]: I1205 10:41:45.845403 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-644c454648-8vjkb" podStartSLOduration=2.845383775 podStartE2EDuration="2.845383775s" podCreationTimestamp="2025-12-05 10:41:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:41:45.835997276 +0000 UTC m=+852.124102789" watchObservedRunningTime="2025-12-05 10:41:45.845383775 +0000 UTC m=+852.133489288" Dec 05 10:41:46 crc kubenswrapper[4796]: I1205 10:41:46.079763 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 10:41:46 crc kubenswrapper[4796]: I1205 10:41:46.080033 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 10:41:46 crc kubenswrapper[4796]: I1205 10:41:46.115091 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 10:41:46 crc kubenswrapper[4796]: I1205 10:41:46.122585 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-internal-api-0" Dec 05 10:41:46 crc kubenswrapper[4796]: I1205 10:41:46.122636 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 05 10:41:46 crc kubenswrapper[4796]: I1205 10:41:46.148603 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 10:41:46 crc kubenswrapper[4796]: I1205 10:41:46.169139 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 10:41:46 crc kubenswrapper[4796]: I1205 10:41:46.175896 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 10:41:46 crc kubenswrapper[4796]: I1205 10:41:46.827027 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 10:41:46 crc kubenswrapper[4796]: I1205 10:41:46.827072 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 10:41:46 crc kubenswrapper[4796]: I1205 10:41:46.827083 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 10:41:46 crc kubenswrapper[4796]: I1205 10:41:46.827093 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 10:41:48 crc kubenswrapper[4796]: I1205 10:41:48.432334 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 10:41:48 crc kubenswrapper[4796]: I1205 10:41:48.435933 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 10:41:48 crc kubenswrapper[4796]: I1205 10:41:48.608033 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-external-api-0" Dec 05 10:41:48 crc kubenswrapper[4796]: I1205 10:41:48.838987 4796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 05 10:41:48 crc kubenswrapper[4796]: I1205 10:41:48.850115 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 10:41:50 crc kubenswrapper[4796]: I1205 10:41:50.554382 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-69fdddc9b6-2ckhp" podUID="fdca92fe-39ad-41e9-978b-1757290eee03" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.140:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.140:8443: connect: connection refused" Dec 05 10:41:50 crc kubenswrapper[4796]: I1205 10:41:50.602440 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78ddb58-f7j44" podUID="05623376-2343-40fb-a4df-508ce1e333e2" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.141:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.141:8443: connect: connection refused" Dec 05 10:41:50 crc kubenswrapper[4796]: I1205 10:41:50.868113 4796 generic.go:334] "Generic (PLEG): container finished" podID="d2395df9-6a76-405e-b346-ee634afb272c" containerID="48ff5b23dafcbc2a56d1da5ed9be272255df80129833f9880f658f5c376a34ab" exitCode=0 Dec 05 10:41:50 crc kubenswrapper[4796]: I1205 10:41:50.868153 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gqtd9" event={"ID":"d2395df9-6a76-405e-b346-ee634afb272c","Type":"ContainerDied","Data":"48ff5b23dafcbc2a56d1da5ed9be272255df80129833f9880f658f5c376a34ab"} Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.354837 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2fftm" Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.361272 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-gqtd9" Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.402174 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b644d420-f868-46a7-9e03-1d6fc4f78894-combined-ca-bundle\") pod \"b644d420-f868-46a7-9e03-1d6fc4f78894\" (UID: \"b644d420-f868-46a7-9e03-1d6fc4f78894\") " Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.402413 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b644d420-f868-46a7-9e03-1d6fc4f78894-db-sync-config-data\") pod \"b644d420-f868-46a7-9e03-1d6fc4f78894\" (UID: \"b644d420-f868-46a7-9e03-1d6fc4f78894\") " Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.402452 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj47r\" (UniqueName: \"kubernetes.io/projected/b644d420-f868-46a7-9e03-1d6fc4f78894-kube-api-access-sj47r\") pod \"b644d420-f868-46a7-9e03-1d6fc4f78894\" (UID: \"b644d420-f868-46a7-9e03-1d6fc4f78894\") " Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.402665 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2395df9-6a76-405e-b346-ee634afb272c-combined-ca-bundle\") pod \"d2395df9-6a76-405e-b346-ee634afb272c\" (UID: \"d2395df9-6a76-405e-b346-ee634afb272c\") " Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.402768 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2395df9-6a76-405e-b346-ee634afb272c-config\") pod \"d2395df9-6a76-405e-b346-ee634afb272c\" (UID: 
\"d2395df9-6a76-405e-b346-ee634afb272c\") " Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.402832 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv2tk\" (UniqueName: \"kubernetes.io/projected/d2395df9-6a76-405e-b346-ee634afb272c-kube-api-access-sv2tk\") pod \"d2395df9-6a76-405e-b346-ee634afb272c\" (UID: \"d2395df9-6a76-405e-b346-ee634afb272c\") " Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.408415 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b644d420-f868-46a7-9e03-1d6fc4f78894-kube-api-access-sj47r" (OuterVolumeSpecName: "kube-api-access-sj47r") pod "b644d420-f868-46a7-9e03-1d6fc4f78894" (UID: "b644d420-f868-46a7-9e03-1d6fc4f78894"). InnerVolumeSpecName "kube-api-access-sj47r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.408537 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b644d420-f868-46a7-9e03-1d6fc4f78894-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b644d420-f868-46a7-9e03-1d6fc4f78894" (UID: "b644d420-f868-46a7-9e03-1d6fc4f78894"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.408971 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2395df9-6a76-405e-b346-ee634afb272c-kube-api-access-sv2tk" (OuterVolumeSpecName: "kube-api-access-sv2tk") pod "d2395df9-6a76-405e-b346-ee634afb272c" (UID: "d2395df9-6a76-405e-b346-ee634afb272c"). InnerVolumeSpecName "kube-api-access-sv2tk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.424408 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2395df9-6a76-405e-b346-ee634afb272c-config" (OuterVolumeSpecName: "config") pod "d2395df9-6a76-405e-b346-ee634afb272c" (UID: "d2395df9-6a76-405e-b346-ee634afb272c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.425502 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b644d420-f868-46a7-9e03-1d6fc4f78894-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b644d420-f868-46a7-9e03-1d6fc4f78894" (UID: "b644d420-f868-46a7-9e03-1d6fc4f78894"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.426470 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2395df9-6a76-405e-b346-ee634afb272c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2395df9-6a76-405e-b346-ee634afb272c" (UID: "d2395df9-6a76-405e-b346-ee634afb272c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.505220 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv2tk\" (UniqueName: \"kubernetes.io/projected/d2395df9-6a76-405e-b346-ee634afb272c-kube-api-access-sv2tk\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.505262 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b644d420-f868-46a7-9e03-1d6fc4f78894-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.505272 4796 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b644d420-f868-46a7-9e03-1d6fc4f78894-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.505282 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj47r\" (UniqueName: \"kubernetes.io/projected/b644d420-f868-46a7-9e03-1d6fc4f78894-kube-api-access-sj47r\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.505292 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2395df9-6a76-405e-b346-ee634afb272c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.505301 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d2395df9-6a76-405e-b346-ee634afb272c-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.919148 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2fftm" event={"ID":"b644d420-f868-46a7-9e03-1d6fc4f78894","Type":"ContainerDied","Data":"6a23d423c17c73fc4ae74d6a8417f99c8fd56badf8a9c8f338677a1b4a69a2c5"} Dec 05 
10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.919193 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a23d423c17c73fc4ae74d6a8417f99c8fd56badf8a9c8f338677a1b4a69a2c5" Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.919158 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2fftm" Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.921095 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-gqtd9" event={"ID":"d2395df9-6a76-405e-b346-ee634afb272c","Type":"ContainerDied","Data":"c0e546517a639bdf5c28ed3ef242baf1695b76d0d9bc472b1d8126caf174b303"} Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.921133 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0e546517a639bdf5c28ed3ef242baf1695b76d0d9bc472b1d8126caf174b303" Dec 05 10:41:56 crc kubenswrapper[4796]: I1205 10:41:56.921142 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-gqtd9" Dec 05 10:41:57 crc kubenswrapper[4796]: E1205 10:41:57.169956 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:7a2056615520e272bae43ec3f34e2ba7a92c1d364b8d9106b53bd694619fc9c2" Dec 05 10:41:57 crc kubenswrapper[4796]: E1205 10:41:57.170218 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:7a2056615520e272bae43ec3f34e2ba7a92c1d364b8d9106b53bd694619fc9c2,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{N
ame:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h497x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-qh5mr_openstack(04e95ca1-d131-4528-baaf-0be6b98a5edf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 10:41:57 crc kubenswrapper[4796]: E1205 10:41:57.171419 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-qh5mr" podUID="04e95ca1-d131-4528-baaf-0be6b98a5edf" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.506607 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pkrsm"] Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.592855 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-9b577c7dc-s8nmt"] Dec 05 10:41:57 crc kubenswrapper[4796]: E1205 10:41:57.593203 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b644d420-f868-46a7-9e03-1d6fc4f78894" 
containerName="barbican-db-sync" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.593232 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b644d420-f868-46a7-9e03-1d6fc4f78894" containerName="barbican-db-sync" Dec 05 10:41:57 crc kubenswrapper[4796]: E1205 10:41:57.593255 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2395df9-6a76-405e-b346-ee634afb272c" containerName="neutron-db-sync" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.593261 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2395df9-6a76-405e-b346-ee634afb272c" containerName="neutron-db-sync" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.593458 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2395df9-6a76-405e-b346-ee634afb272c" containerName="neutron-db-sync" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.593481 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b644d420-f868-46a7-9e03-1d6fc4f78894" containerName="barbican-db-sync" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.594343 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-9b577c7dc-s8nmt" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.601272 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.601390 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.601524 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-2f7st" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.603160 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9b577c7dc-s8nmt"] Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.625624 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/116dc4c5-e13e-494d-8909-3a3e23c45ec1-config-data-custom\") pod \"barbican-worker-9b577c7dc-s8nmt\" (UID: \"116dc4c5-e13e-494d-8909-3a3e23c45ec1\") " pod="openstack/barbican-worker-9b577c7dc-s8nmt" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.625773 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/116dc4c5-e13e-494d-8909-3a3e23c45ec1-logs\") pod \"barbican-worker-9b577c7dc-s8nmt\" (UID: \"116dc4c5-e13e-494d-8909-3a3e23c45ec1\") " pod="openstack/barbican-worker-9b577c7dc-s8nmt" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.625866 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/116dc4c5-e13e-494d-8909-3a3e23c45ec1-combined-ca-bundle\") pod \"barbican-worker-9b577c7dc-s8nmt\" (UID: \"116dc4c5-e13e-494d-8909-3a3e23c45ec1\") " pod="openstack/barbican-worker-9b577c7dc-s8nmt" Dec 05 
10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.626004 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhx8j\" (UniqueName: \"kubernetes.io/projected/116dc4c5-e13e-494d-8909-3a3e23c45ec1-kube-api-access-mhx8j\") pod \"barbican-worker-9b577c7dc-s8nmt\" (UID: \"116dc4c5-e13e-494d-8909-3a3e23c45ec1\") " pod="openstack/barbican-worker-9b577c7dc-s8nmt" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.626078 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/116dc4c5-e13e-494d-8909-3a3e23c45ec1-config-data\") pod \"barbican-worker-9b577c7dc-s8nmt\" (UID: \"116dc4c5-e13e-494d-8909-3a3e23c45ec1\") " pod="openstack/barbican-worker-9b577c7dc-s8nmt" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.645198 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-849ff95dc5-zxk89"] Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.646730 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.660614 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-b4d7f4754-f9kqr"] Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.662067 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.663404 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.680593 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-849ff95dc5-zxk89"] Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.708281 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-b4d7f4754-f9kqr"] Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.727788 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-config\") pod \"dnsmasq-dns-849ff95dc5-zxk89\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.727837 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-ovsdbserver-nb\") pod \"dnsmasq-dns-849ff95dc5-zxk89\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.727867 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-dns-svc\") pod \"dnsmasq-dns-849ff95dc5-zxk89\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.727889 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6edb9afc-40e5-4a55-bf3e-b77c4fe4951b-config-data\") pod \"barbican-keystone-listener-b4d7f4754-f9kqr\" (UID: \"6edb9afc-40e5-4a55-bf3e-b77c4fe4951b\") " pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.727916 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6edb9afc-40e5-4a55-bf3e-b77c4fe4951b-combined-ca-bundle\") pod \"barbican-keystone-listener-b4d7f4754-f9kqr\" (UID: \"6edb9afc-40e5-4a55-bf3e-b77c4fe4951b\") " pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.727939 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/116dc4c5-e13e-494d-8909-3a3e23c45ec1-config-data-custom\") pod \"barbican-worker-9b577c7dc-s8nmt\" (UID: \"116dc4c5-e13e-494d-8909-3a3e23c45ec1\") " pod="openstack/barbican-worker-9b577c7dc-s8nmt" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.727971 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6edb9afc-40e5-4a55-bf3e-b77c4fe4951b-logs\") pod \"barbican-keystone-listener-b4d7f4754-f9kqr\" (UID: \"6edb9afc-40e5-4a55-bf3e-b77c4fe4951b\") " pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.727990 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/116dc4c5-e13e-494d-8909-3a3e23c45ec1-logs\") pod \"barbican-worker-9b577c7dc-s8nmt\" (UID: \"116dc4c5-e13e-494d-8909-3a3e23c45ec1\") " pod="openstack/barbican-worker-9b577c7dc-s8nmt" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.728012 4796 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/116dc4c5-e13e-494d-8909-3a3e23c45ec1-combined-ca-bundle\") pod \"barbican-worker-9b577c7dc-s8nmt\" (UID: \"116dc4c5-e13e-494d-8909-3a3e23c45ec1\") " pod="openstack/barbican-worker-9b577c7dc-s8nmt" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.728030 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vrdb\" (UniqueName: \"kubernetes.io/projected/edc60b3b-668c-4c2a-b1d9-53d605a943a5-kube-api-access-9vrdb\") pod \"dnsmasq-dns-849ff95dc5-zxk89\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.728058 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-dns-swift-storage-0\") pod \"dnsmasq-dns-849ff95dc5-zxk89\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.728077 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6edb9afc-40e5-4a55-bf3e-b77c4fe4951b-config-data-custom\") pod \"barbican-keystone-listener-b4d7f4754-f9kqr\" (UID: \"6edb9afc-40e5-4a55-bf3e-b77c4fe4951b\") " pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.728096 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdwxc\" (UniqueName: \"kubernetes.io/projected/6edb9afc-40e5-4a55-bf3e-b77c4fe4951b-kube-api-access-mdwxc\") pod \"barbican-keystone-listener-b4d7f4754-f9kqr\" (UID: \"6edb9afc-40e5-4a55-bf3e-b77c4fe4951b\") " pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" 
Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.728136 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-ovsdbserver-sb\") pod \"dnsmasq-dns-849ff95dc5-zxk89\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.728159 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhx8j\" (UniqueName: \"kubernetes.io/projected/116dc4c5-e13e-494d-8909-3a3e23c45ec1-kube-api-access-mhx8j\") pod \"barbican-worker-9b577c7dc-s8nmt\" (UID: \"116dc4c5-e13e-494d-8909-3a3e23c45ec1\") " pod="openstack/barbican-worker-9b577c7dc-s8nmt" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.728180 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/116dc4c5-e13e-494d-8909-3a3e23c45ec1-config-data\") pod \"barbican-worker-9b577c7dc-s8nmt\" (UID: \"116dc4c5-e13e-494d-8909-3a3e23c45ec1\") " pod="openstack/barbican-worker-9b577c7dc-s8nmt" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.728784 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/116dc4c5-e13e-494d-8909-3a3e23c45ec1-logs\") pod \"barbican-worker-9b577c7dc-s8nmt\" (UID: \"116dc4c5-e13e-494d-8909-3a3e23c45ec1\") " pod="openstack/barbican-worker-9b577c7dc-s8nmt" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.736661 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/116dc4c5-e13e-494d-8909-3a3e23c45ec1-config-data-custom\") pod \"barbican-worker-9b577c7dc-s8nmt\" (UID: \"116dc4c5-e13e-494d-8909-3a3e23c45ec1\") " pod="openstack/barbican-worker-9b577c7dc-s8nmt" Dec 05 10:41:57 crc 
kubenswrapper[4796]: I1205 10:41:57.736942 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/116dc4c5-e13e-494d-8909-3a3e23c45ec1-combined-ca-bundle\") pod \"barbican-worker-9b577c7dc-s8nmt\" (UID: \"116dc4c5-e13e-494d-8909-3a3e23c45ec1\") " pod="openstack/barbican-worker-9b577c7dc-s8nmt" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.755640 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/116dc4c5-e13e-494d-8909-3a3e23c45ec1-config-data\") pod \"barbican-worker-9b577c7dc-s8nmt\" (UID: \"116dc4c5-e13e-494d-8909-3a3e23c45ec1\") " pod="openstack/barbican-worker-9b577c7dc-s8nmt" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.763329 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhx8j\" (UniqueName: \"kubernetes.io/projected/116dc4c5-e13e-494d-8909-3a3e23c45ec1-kube-api-access-mhx8j\") pod \"barbican-worker-9b577c7dc-s8nmt\" (UID: \"116dc4c5-e13e-494d-8909-3a3e23c45ec1\") " pod="openstack/barbican-worker-9b577c7dc-s8nmt" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.792244 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-849ff95dc5-zxk89"] Dec 05 10:41:57 crc kubenswrapper[4796]: E1205 10:41:57.792963 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-9vrdb ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" podUID="edc60b3b-668c-4c2a-b1d9-53d605a943a5" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.806237 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65dd957765-k8dtb"] Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.808109 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.824841 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65dd957765-k8dtb"] Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.829235 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6edb9afc-40e5-4a55-bf3e-b77c4fe4951b-config-data\") pod \"barbican-keystone-listener-b4d7f4754-f9kqr\" (UID: \"6edb9afc-40e5-4a55-bf3e-b77c4fe4951b\") " pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.829341 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6edb9afc-40e5-4a55-bf3e-b77c4fe4951b-combined-ca-bundle\") pod \"barbican-keystone-listener-b4d7f4754-f9kqr\" (UID: \"6edb9afc-40e5-4a55-bf3e-b77c4fe4951b\") " pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.829420 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpqxr\" (UniqueName: \"kubernetes.io/projected/48ef35ce-c0f4-47d7-b025-786c933f29f9-kube-api-access-hpqxr\") pod \"dnsmasq-dns-65dd957765-k8dtb\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.829507 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-ovsdbserver-nb\") pod \"dnsmasq-dns-65dd957765-k8dtb\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.829569 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6edb9afc-40e5-4a55-bf3e-b77c4fe4951b-logs\") pod \"barbican-keystone-listener-b4d7f4754-f9kqr\" (UID: \"6edb9afc-40e5-4a55-bf3e-b77c4fe4951b\") " pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.829646 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vrdb\" (UniqueName: \"kubernetes.io/projected/edc60b3b-668c-4c2a-b1d9-53d605a943a5-kube-api-access-9vrdb\") pod \"dnsmasq-dns-849ff95dc5-zxk89\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.829738 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-dns-swift-storage-0\") pod \"dnsmasq-dns-849ff95dc5-zxk89\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.829811 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6edb9afc-40e5-4a55-bf3e-b77c4fe4951b-config-data-custom\") pod \"barbican-keystone-listener-b4d7f4754-f9kqr\" (UID: \"6edb9afc-40e5-4a55-bf3e-b77c4fe4951b\") " pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.829873 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdwxc\" (UniqueName: \"kubernetes.io/projected/6edb9afc-40e5-4a55-bf3e-b77c4fe4951b-kube-api-access-mdwxc\") pod \"barbican-keystone-listener-b4d7f4754-f9kqr\" (UID: \"6edb9afc-40e5-4a55-bf3e-b77c4fe4951b\") " pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" Dec 05 10:41:57 crc kubenswrapper[4796]: 
I1205 10:41:57.830005 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-ovsdbserver-sb\") pod \"dnsmasq-dns-65dd957765-k8dtb\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.830069 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-ovsdbserver-sb\") pod \"dnsmasq-dns-849ff95dc5-zxk89\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.830153 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-config\") pod \"dnsmasq-dns-849ff95dc5-zxk89\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.830223 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-dns-svc\") pod \"dnsmasq-dns-65dd957765-k8dtb\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.830286 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-config\") pod \"dnsmasq-dns-65dd957765-k8dtb\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.830364 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-ovsdbserver-nb\") pod \"dnsmasq-dns-849ff95dc5-zxk89\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.830436 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-dns-swift-storage-0\") pod \"dnsmasq-dns-65dd957765-k8dtb\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.830513 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-dns-svc\") pod \"dnsmasq-dns-849ff95dc5-zxk89\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.831305 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-dns-svc\") pod \"dnsmasq-dns-849ff95dc5-zxk89\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.832837 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6edb9afc-40e5-4a55-bf3e-b77c4fe4951b-logs\") pod \"barbican-keystone-listener-b4d7f4754-f9kqr\" (UID: \"6edb9afc-40e5-4a55-bf3e-b77c4fe4951b\") " pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.833269 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-dns-swift-storage-0\") pod \"dnsmasq-dns-849ff95dc5-zxk89\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.834804 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-ovsdbserver-sb\") pod \"dnsmasq-dns-849ff95dc5-zxk89\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.835311 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-config\") pod \"dnsmasq-dns-849ff95dc5-zxk89\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.835344 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5cdb46c784-6qzbh"] Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.840754 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6edb9afc-40e5-4a55-bf3e-b77c4fe4951b-config-data\") pod \"barbican-keystone-listener-b4d7f4754-f9kqr\" (UID: \"6edb9afc-40e5-4a55-bf3e-b77c4fe4951b\") " pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.841833 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6edb9afc-40e5-4a55-bf3e-b77c4fe4951b-combined-ca-bundle\") pod \"barbican-keystone-listener-b4d7f4754-f9kqr\" (UID: \"6edb9afc-40e5-4a55-bf3e-b77c4fe4951b\") " pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" Dec 05 10:41:57 
crc kubenswrapper[4796]: I1205 10:41:57.842269 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.843459 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.844308 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cdb46c784-6qzbh"] Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.847503 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-ovsdbserver-nb\") pod \"dnsmasq-dns-849ff95dc5-zxk89\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.848967 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6edb9afc-40e5-4a55-bf3e-b77c4fe4951b-config-data-custom\") pod \"barbican-keystone-listener-b4d7f4754-f9kqr\" (UID: \"6edb9afc-40e5-4a55-bf3e-b77c4fe4951b\") " pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.852338 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6f5c67c464-8hmbc"] Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.874826 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.874931 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vrdb\" (UniqueName: \"kubernetes.io/projected/edc60b3b-668c-4c2a-b1d9-53d605a943a5-kube-api-access-9vrdb\") pod \"dnsmasq-dns-849ff95dc5-zxk89\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.879927 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.880079 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.880869 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.881147 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-277hs" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.882993 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdwxc\" (UniqueName: \"kubernetes.io/projected/6edb9afc-40e5-4a55-bf3e-b77c4fe4951b-kube-api-access-mdwxc\") pod \"barbican-keystone-listener-b4d7f4754-f9kqr\" (UID: \"6edb9afc-40e5-4a55-bf3e-b77c4fe4951b\") " pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.913352 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f5c67c464-8hmbc"] Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.931643 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"01d9c19c-9758-4d61-8a0d-53868923bfea","Type":"ContainerStarted","Data":"0e524c79406e51179c86d02cbd0cdbabb5530937449e821cb2f21f39994ef255"} Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.932244 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-ovsdbserver-nb\") pod \"dnsmasq-dns-65dd957765-k8dtb\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.932289 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0022df39-ef89-40d9-9be4-5297d4bd6dc5-logs\") pod \"barbican-api-5cdb46c784-6qzbh\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.932313 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4c4n\" (UniqueName: \"kubernetes.io/projected/0c5562c0-e374-49b7-93da-78040b742805-kube-api-access-h4c4n\") pod \"neutron-6f5c67c464-8hmbc\" (UID: \"0c5562c0-e374-49b7-93da-78040b742805\") " pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.932439 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-combined-ca-bundle\") pod \"neutron-6f5c67c464-8hmbc\" (UID: \"0c5562c0-e374-49b7-93da-78040b742805\") " pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.932607 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-ovsdbserver-sb\") pod \"dnsmasq-dns-65dd957765-k8dtb\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.932676 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0022df39-ef89-40d9-9be4-5297d4bd6dc5-config-data-custom\") pod \"barbican-api-5cdb46c784-6qzbh\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.932728 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0022df39-ef89-40d9-9be4-5297d4bd6dc5-combined-ca-bundle\") pod \"barbican-api-5cdb46c784-6qzbh\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.932749 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-config\") pod \"neutron-6f5c67c464-8hmbc\" (UID: \"0c5562c0-e374-49b7-93da-78040b742805\") " pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.932765 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m9jk\" (UniqueName: \"kubernetes.io/projected/0022df39-ef89-40d9-9be4-5297d4bd6dc5-kube-api-access-4m9jk\") pod \"barbican-api-5cdb46c784-6qzbh\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.932814 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-httpd-config\") pod \"neutron-6f5c67c464-8hmbc\" (UID: \"0c5562c0-e374-49b7-93da-78040b742805\") " pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.932871 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-dns-svc\") pod \"dnsmasq-dns-65dd957765-k8dtb\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.932904 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-config\") pod \"dnsmasq-dns-65dd957765-k8dtb\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.932929 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0022df39-ef89-40d9-9be4-5297d4bd6dc5-config-data\") pod \"barbican-api-5cdb46c784-6qzbh\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.932949 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-dns-swift-storage-0\") pod \"dnsmasq-dns-65dd957765-k8dtb\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.933306 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-ovsdbserver-nb\") pod \"dnsmasq-dns-65dd957765-k8dtb\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.933399 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-ovsdbserver-sb\") pod \"dnsmasq-dns-65dd957765-k8dtb\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.933593 4796 generic.go:334] "Generic (PLEG): container finished" podID="02d1fa9e-96e1-44c5-89dd-e2c619890cee" containerID="7a49a8a2575ab507a720e314ef5719763f8a5d44cc8ef6277741a3e650e05b59" exitCode=0 Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.933658 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-dns-svc\") pod \"dnsmasq-dns-65dd957765-k8dtb\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.933710 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkrsm" event={"ID":"02d1fa9e-96e1-44c5-89dd-e2c619890cee","Type":"ContainerDied","Data":"7a49a8a2575ab507a720e314ef5719763f8a5d44cc8ef6277741a3e650e05b59"} Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.933677 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.933793 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-ovndb-tls-certs\") pod \"neutron-6f5c67c464-8hmbc\" (UID: \"0c5562c0-e374-49b7-93da-78040b742805\") " pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.933852 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpqxr\" (UniqueName: \"kubernetes.io/projected/48ef35ce-c0f4-47d7-b025-786c933f29f9-kube-api-access-hpqxr\") pod \"dnsmasq-dns-65dd957765-k8dtb\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.934045 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-config\") pod \"dnsmasq-dns-65dd957765-k8dtb\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.934135 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkrsm" event={"ID":"02d1fa9e-96e1-44c5-89dd-e2c619890cee","Type":"ContainerStarted","Data":"44d4c22aa7f7a963ff118788818db532e5f40854af6f7de105d9c2250d95e1f5"} Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.934393 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-dns-swift-storage-0\") pod \"dnsmasq-dns-65dd957765-k8dtb\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:57 crc kubenswrapper[4796]: E1205 
10:41:57.935839 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:7a2056615520e272bae43ec3f34e2ba7a92c1d364b8d9106b53bd694619fc9c2\\\"\"" pod="openstack/cinder-db-sync-qh5mr" podUID="04e95ca1-d131-4528-baaf-0be6b98a5edf" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.941731 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.953558 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpqxr\" (UniqueName: \"kubernetes.io/projected/48ef35ce-c0f4-47d7-b025-786c933f29f9-kube-api-access-hpqxr\") pod \"dnsmasq-dns-65dd957765-k8dtb\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:57 crc kubenswrapper[4796]: I1205 10:41:57.971800 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9b577c7dc-s8nmt" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.012432 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.035772 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-ovsdbserver-nb\") pod \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.035936 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-dns-svc\") pod \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.036024 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-ovsdbserver-sb\") pod \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.036139 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-dns-swift-storage-0\") pod \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.036373 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vrdb\" (UniqueName: \"kubernetes.io/projected/edc60b3b-668c-4c2a-b1d9-53d605a943a5-kube-api-access-9vrdb\") pod \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.036466 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-config\") pod \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\" (UID: \"edc60b3b-668c-4c2a-b1d9-53d605a943a5\") " Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.036589 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "edc60b3b-668c-4c2a-b1d9-53d605a943a5" (UID: "edc60b3b-668c-4c2a-b1d9-53d605a943a5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.036723 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-httpd-config\") pod \"neutron-6f5c67c464-8hmbc\" (UID: \"0c5562c0-e374-49b7-93da-78040b742805\") " pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.036784 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0022df39-ef89-40d9-9be4-5297d4bd6dc5-config-data\") pod \"barbican-api-5cdb46c784-6qzbh\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.036827 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-ovndb-tls-certs\") pod \"neutron-6f5c67c464-8hmbc\" (UID: \"0c5562c0-e374-49b7-93da-78040b742805\") " pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.036883 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0022df39-ef89-40d9-9be4-5297d4bd6dc5-logs\") 
pod \"barbican-api-5cdb46c784-6qzbh\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.036902 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4c4n\" (UniqueName: \"kubernetes.io/projected/0c5562c0-e374-49b7-93da-78040b742805-kube-api-access-h4c4n\") pod \"neutron-6f5c67c464-8hmbc\" (UID: \"0c5562c0-e374-49b7-93da-78040b742805\") " pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.036946 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-combined-ca-bundle\") pod \"neutron-6f5c67c464-8hmbc\" (UID: \"0c5562c0-e374-49b7-93da-78040b742805\") " pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.036954 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "edc60b3b-668c-4c2a-b1d9-53d605a943a5" (UID: "edc60b3b-668c-4c2a-b1d9-53d605a943a5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.037001 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "edc60b3b-668c-4c2a-b1d9-53d605a943a5" (UID: "edc60b3b-668c-4c2a-b1d9-53d605a943a5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.037080 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0022df39-ef89-40d9-9be4-5297d4bd6dc5-config-data-custom\") pod \"barbican-api-5cdb46c784-6qzbh\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.037104 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0022df39-ef89-40d9-9be4-5297d4bd6dc5-combined-ca-bundle\") pod \"barbican-api-5cdb46c784-6qzbh\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.037124 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-config\") pod \"neutron-6f5c67c464-8hmbc\" (UID: \"0c5562c0-e374-49b7-93da-78040b742805\") " pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.037145 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m9jk\" (UniqueName: \"kubernetes.io/projected/0022df39-ef89-40d9-9be4-5297d4bd6dc5-kube-api-access-4m9jk\") pod \"barbican-api-5cdb46c784-6qzbh\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.037193 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.037293 4796 reconciler_common.go:293] "Volume detached for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.037306 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.037928 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "edc60b3b-668c-4c2a-b1d9-53d605a943a5" (UID: "edc60b3b-668c-4c2a-b1d9-53d605a943a5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.040328 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-config" (OuterVolumeSpecName: "config") pod "edc60b3b-668c-4c2a-b1d9-53d605a943a5" (UID: "edc60b3b-668c-4c2a-b1d9-53d605a943a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.040515 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc60b3b-668c-4c2a-b1d9-53d605a943a5-kube-api-access-9vrdb" (OuterVolumeSpecName: "kube-api-access-9vrdb") pod "edc60b3b-668c-4c2a-b1d9-53d605a943a5" (UID: "edc60b3b-668c-4c2a-b1d9-53d605a943a5"). InnerVolumeSpecName "kube-api-access-9vrdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.040664 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0022df39-ef89-40d9-9be4-5297d4bd6dc5-logs\") pod \"barbican-api-5cdb46c784-6qzbh\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.041311 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-combined-ca-bundle\") pod \"neutron-6f5c67c464-8hmbc\" (UID: \"0c5562c0-e374-49b7-93da-78040b742805\") " pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.044334 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0022df39-ef89-40d9-9be4-5297d4bd6dc5-combined-ca-bundle\") pod \"barbican-api-5cdb46c784-6qzbh\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.044503 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-config\") pod \"neutron-6f5c67c464-8hmbc\" (UID: \"0c5562c0-e374-49b7-93da-78040b742805\") " pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.048854 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-ovndb-tls-certs\") pod \"neutron-6f5c67c464-8hmbc\" (UID: \"0c5562c0-e374-49b7-93da-78040b742805\") " pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.049414 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-httpd-config\") pod \"neutron-6f5c67c464-8hmbc\" (UID: \"0c5562c0-e374-49b7-93da-78040b742805\") " pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.049784 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0022df39-ef89-40d9-9be4-5297d4bd6dc5-config-data-custom\") pod \"barbican-api-5cdb46c784-6qzbh\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.055528 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0022df39-ef89-40d9-9be4-5297d4bd6dc5-config-data\") pod \"barbican-api-5cdb46c784-6qzbh\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.058459 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m9jk\" (UniqueName: \"kubernetes.io/projected/0022df39-ef89-40d9-9be4-5297d4bd6dc5-kube-api-access-4m9jk\") pod \"barbican-api-5cdb46c784-6qzbh\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.058995 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4c4n\" (UniqueName: \"kubernetes.io/projected/0c5562c0-e374-49b7-93da-78040b742805-kube-api-access-h4c4n\") pod \"neutron-6f5c67c464-8hmbc\" (UID: \"0c5562c0-e374-49b7-93da-78040b742805\") " pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.140054 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vrdb\" 
(UniqueName: \"kubernetes.io/projected/edc60b3b-668c-4c2a-b1d9-53d605a943a5-kube-api-access-9vrdb\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.140643 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.140653 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edc60b3b-668c-4c2a-b1d9-53d605a943a5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.178351 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.195184 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.200723 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.396629 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9b577c7dc-s8nmt"] Dec 05 10:41:58 crc kubenswrapper[4796]: W1205 10:41:58.400444 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod116dc4c5_e13e_494d_8909_3a3e23c45ec1.slice/crio-bf741de83996351c5e8d80469d578bccb38a87f1fbebde33fadfe3193e914201 WatchSource:0}: Error finding container bf741de83996351c5e8d80469d578bccb38a87f1fbebde33fadfe3193e914201: Status 404 returned error can't find the container with id bf741de83996351c5e8d80469d578bccb38a87f1fbebde33fadfe3193e914201 Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.472349 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-b4d7f4754-f9kqr"] Dec 05 10:41:58 crc kubenswrapper[4796]: W1205 10:41:58.482455 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6edb9afc_40e5_4a55_bf3e_b77c4fe4951b.slice/crio-277d8e5dcde67f3726b3efa690be2b568ca26e04be42a9cafc0dcf8ed04817f8 WatchSource:0}: Error finding container 277d8e5dcde67f3726b3efa690be2b568ca26e04be42a9cafc0dcf8ed04817f8: Status 404 returned error can't find the container with id 277d8e5dcde67f3726b3efa690be2b568ca26e04be42a9cafc0dcf8ed04817f8 Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.638505 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65dd957765-k8dtb"] Dec 05 10:41:58 crc kubenswrapper[4796]: W1205 10:41:58.657324 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48ef35ce_c0f4_47d7_b025_786c933f29f9.slice/crio-dd8ddd9d5210b5e60be2d2aa2c311f253f6f309d81d5aef487fb7f8d1849f554 WatchSource:0}: Error finding 
container dd8ddd9d5210b5e60be2d2aa2c311f253f6f309d81d5aef487fb7f8d1849f554: Status 404 returned error can't find the container with id dd8ddd9d5210b5e60be2d2aa2c311f253f6f309d81d5aef487fb7f8d1849f554 Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.694653 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cdb46c784-6qzbh"] Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.859634 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f5c67c464-8hmbc"] Dec 05 10:41:58 crc kubenswrapper[4796]: W1205 10:41:58.918824 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c5562c0_e374_49b7_93da_78040b742805.slice/crio-6f2777da5e1b5f2633b97e6981c54c380e68b1866d5cb8142bd8bc7336e98f19 WatchSource:0}: Error finding container 6f2777da5e1b5f2633b97e6981c54c380e68b1866d5cb8142bd8bc7336e98f19: Status 404 returned error can't find the container with id 6f2777da5e1b5f2633b97e6981c54c380e68b1866d5cb8142bd8bc7336e98f19 Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.942253 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" event={"ID":"6edb9afc-40e5-4a55-bf3e-b77c4fe4951b","Type":"ContainerStarted","Data":"277d8e5dcde67f3726b3efa690be2b568ca26e04be42a9cafc0dcf8ed04817f8"} Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.945465 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9b577c7dc-s8nmt" event={"ID":"116dc4c5-e13e-494d-8909-3a3e23c45ec1","Type":"ContainerStarted","Data":"bf741de83996351c5e8d80469d578bccb38a87f1fbebde33fadfe3193e914201"} Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.946870 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cdb46c784-6qzbh" 
event={"ID":"0022df39-ef89-40d9-9be4-5297d4bd6dc5","Type":"ContainerStarted","Data":"956720d81920982a3e64e227e1218a04fd6f867cf91263ad9cd1f8e1c197bcba"} Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.946906 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cdb46c784-6qzbh" event={"ID":"0022df39-ef89-40d9-9be4-5297d4bd6dc5","Type":"ContainerStarted","Data":"b5c6a72ec5990afdcc8a05cc5ef1e99eccc71baaad77d4083cf3920f07f94a97"} Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.947737 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f5c67c464-8hmbc" event={"ID":"0c5562c0-e374-49b7-93da-78040b742805","Type":"ContainerStarted","Data":"6f2777da5e1b5f2633b97e6981c54c380e68b1866d5cb8142bd8bc7336e98f19"} Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.959531 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkrsm" event={"ID":"02d1fa9e-96e1-44c5-89dd-e2c619890cee","Type":"ContainerStarted","Data":"fbd7b50b7db938066b734438baa9c727cd55bf66a79167a70cc5178bfb225320"} Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.975600 4796 generic.go:334] "Generic (PLEG): container finished" podID="48ef35ce-c0f4-47d7-b025-786c933f29f9" containerID="76d436c60123508c75fc6c36b79ba7932596555c9e648b64215babec56330941" exitCode=0 Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.975673 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-849ff95dc5-zxk89" Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.976129 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65dd957765-k8dtb" event={"ID":"48ef35ce-c0f4-47d7-b025-786c933f29f9","Type":"ContainerDied","Data":"76d436c60123508c75fc6c36b79ba7932596555c9e648b64215babec56330941"} Dec 05 10:41:58 crc kubenswrapper[4796]: I1205 10:41:58.976154 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65dd957765-k8dtb" event={"ID":"48ef35ce-c0f4-47d7-b025-786c933f29f9","Type":"ContainerStarted","Data":"dd8ddd9d5210b5e60be2d2aa2c311f253f6f309d81d5aef487fb7f8d1849f554"} Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.041968 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-849ff95dc5-zxk89"] Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.049347 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-849ff95dc5-zxk89"] Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.662338 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-fb965878c-qncj9"] Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.663879 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.665254 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.674109 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.679444 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fb965878c-qncj9"] Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.803332 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/22c4293b-736f-4c5a-b47d-6a0a870bf1da-httpd-config\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.803395 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22c4293b-736f-4c5a-b47d-6a0a870bf1da-internal-tls-certs\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.803436 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22c4293b-736f-4c5a-b47d-6a0a870bf1da-public-tls-certs\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.803797 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz6v7\" (UniqueName: 
\"kubernetes.io/projected/22c4293b-736f-4c5a-b47d-6a0a870bf1da-kube-api-access-dz6v7\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.803992 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22c4293b-736f-4c5a-b47d-6a0a870bf1da-ovndb-tls-certs\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.805441 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22c4293b-736f-4c5a-b47d-6a0a870bf1da-combined-ca-bundle\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.805531 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/22c4293b-736f-4c5a-b47d-6a0a870bf1da-config\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.907542 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22c4293b-736f-4c5a-b47d-6a0a870bf1da-combined-ca-bundle\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.907592 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/22c4293b-736f-4c5a-b47d-6a0a870bf1da-config\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.907670 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/22c4293b-736f-4c5a-b47d-6a0a870bf1da-httpd-config\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.907706 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22c4293b-736f-4c5a-b47d-6a0a870bf1da-internal-tls-certs\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.907730 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22c4293b-736f-4c5a-b47d-6a0a870bf1da-public-tls-certs\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.907759 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz6v7\" (UniqueName: \"kubernetes.io/projected/22c4293b-736f-4c5a-b47d-6a0a870bf1da-kube-api-access-dz6v7\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.907786 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22c4293b-736f-4c5a-b47d-6a0a870bf1da-ovndb-tls-certs\") pod 
\"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.913088 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/22c4293b-736f-4c5a-b47d-6a0a870bf1da-public-tls-certs\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.913114 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22c4293b-736f-4c5a-b47d-6a0a870bf1da-combined-ca-bundle\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.913548 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/22c4293b-736f-4c5a-b47d-6a0a870bf1da-httpd-config\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.914869 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/22c4293b-736f-4c5a-b47d-6a0a870bf1da-config\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.919839 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/22c4293b-736f-4c5a-b47d-6a0a870bf1da-internal-tls-certs\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: 
I1205 10:41:59.924244 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22c4293b-736f-4c5a-b47d-6a0a870bf1da-ovndb-tls-certs\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.950593 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz6v7\" (UniqueName: \"kubernetes.io/projected/22c4293b-736f-4c5a-b47d-6a0a870bf1da-kube-api-access-dz6v7\") pod \"neutron-fb965878c-qncj9\" (UID: \"22c4293b-736f-4c5a-b47d-6a0a870bf1da\") " pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:41:59 crc kubenswrapper[4796]: I1205 10:41:59.978479 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:42:00 crc kubenswrapper[4796]: I1205 10:42:00.008167 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65dd957765-k8dtb" event={"ID":"48ef35ce-c0f4-47d7-b025-786c933f29f9","Type":"ContainerStarted","Data":"827fc1b8dbc4d9127c757c3c2c09072a889a65b967d2563ef4c713cc89262a7c"} Dec 05 10:42:00 crc kubenswrapper[4796]: I1205 10:42:00.008427 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:42:00 crc kubenswrapper[4796]: I1205 10:42:00.023853 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cdb46c784-6qzbh" event={"ID":"0022df39-ef89-40d9-9be4-5297d4bd6dc5","Type":"ContainerStarted","Data":"d9a68af4d90a621eff783dc744beea386f5dbed9856d09adbdab0b03ed810dcb"} Dec 05 10:42:00 crc kubenswrapper[4796]: I1205 10:42:00.024490 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:42:00 crc kubenswrapper[4796]: I1205 10:42:00.024621 4796 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:42:00 crc kubenswrapper[4796]: I1205 10:42:00.030191 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f5c67c464-8hmbc" event={"ID":"0c5562c0-e374-49b7-93da-78040b742805","Type":"ContainerStarted","Data":"4a3f3b07f141019e83de6953dc7badc4c42fb749307ea42afd566cdeb6b8110c"} Dec 05 10:42:00 crc kubenswrapper[4796]: I1205 10:42:00.040485 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65dd957765-k8dtb" podStartSLOduration=3.040454114 podStartE2EDuration="3.040454114s" podCreationTimestamp="2025-12-05 10:41:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:42:00.030075018 +0000 UTC m=+866.318180530" watchObservedRunningTime="2025-12-05 10:42:00.040454114 +0000 UTC m=+866.328559628" Dec 05 10:42:00 crc kubenswrapper[4796]: I1205 10:42:00.044412 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc60b3b-668c-4c2a-b1d9-53d605a943a5" path="/var/lib/kubelet/pods/edc60b3b-668c-4c2a-b1d9-53d605a943a5/volumes" Dec 05 10:42:00 crc kubenswrapper[4796]: I1205 10:42:00.056346 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5cdb46c784-6qzbh" podStartSLOduration=3.056327409 podStartE2EDuration="3.056327409s" podCreationTimestamp="2025-12-05 10:41:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:42:00.055108476 +0000 UTC m=+866.343213979" watchObservedRunningTime="2025-12-05 10:42:00.056327409 +0000 UTC m=+866.344432921" Dec 05 10:42:01 crc kubenswrapper[4796]: I1205 10:42:01.043734 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f5c67c464-8hmbc" 
event={"ID":"0c5562c0-e374-49b7-93da-78040b742805","Type":"ContainerStarted","Data":"87fcdcfc32559969336f51c57e8712b9a25d66666df0a9b8dc53381299bb8b05"} Dec 05 10:42:01 crc kubenswrapper[4796]: I1205 10:42:01.044236 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:42:01 crc kubenswrapper[4796]: I1205 10:42:01.050757 4796 generic.go:334] "Generic (PLEG): container finished" podID="02d1fa9e-96e1-44c5-89dd-e2c619890cee" containerID="fbd7b50b7db938066b734438baa9c727cd55bf66a79167a70cc5178bfb225320" exitCode=0 Dec 05 10:42:01 crc kubenswrapper[4796]: I1205 10:42:01.052982 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkrsm" event={"ID":"02d1fa9e-96e1-44c5-89dd-e2c619890cee","Type":"ContainerDied","Data":"fbd7b50b7db938066b734438baa9c727cd55bf66a79167a70cc5178bfb225320"} Dec 05 10:42:01 crc kubenswrapper[4796]: I1205 10:42:01.066763 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6f5c67c464-8hmbc" podStartSLOduration=4.066735778 podStartE2EDuration="4.066735778s" podCreationTimestamp="2025-12-05 10:41:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:42:01.05760039 +0000 UTC m=+867.345705904" watchObservedRunningTime="2025-12-05 10:42:01.066735778 +0000 UTC m=+867.354841280" Dec 05 10:42:01 crc kubenswrapper[4796]: I1205 10:42:01.225918 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fb965878c-qncj9"] Dec 05 10:42:02 crc kubenswrapper[4796]: I1205 10:42:02.065131 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fb965878c-qncj9" event={"ID":"22c4293b-736f-4c5a-b47d-6a0a870bf1da","Type":"ContainerStarted","Data":"e89c75bcaebb46a9d7421e29cfc89f2bfc29d7b0833453f00585d48fe523773e"} Dec 05 10:42:02 crc kubenswrapper[4796]: I1205 10:42:02.065644 4796 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fb965878c-qncj9" event={"ID":"22c4293b-736f-4c5a-b47d-6a0a870bf1da","Type":"ContainerStarted","Data":"9afec8d6cdca715f78639e04bc255e2109d8e823a16b0b6be33097dc8e87dfc5"} Dec 05 10:42:02 crc kubenswrapper[4796]: I1205 10:42:02.066987 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" event={"ID":"6edb9afc-40e5-4a55-bf3e-b77c4fe4951b","Type":"ContainerStarted","Data":"992d931163b89c6c8f3780827ec42773d06b951ed046cb8b1570e02053ac7820"} Dec 05 10:42:02 crc kubenswrapper[4796]: I1205 10:42:02.071176 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9b577c7dc-s8nmt" event={"ID":"116dc4c5-e13e-494d-8909-3a3e23c45ec1","Type":"ContainerStarted","Data":"0a1bec4c53b6bb29c538feaa13de0f1e139892117a52d5c5341b1eca2ead7a20"} Dec 05 10:42:02 crc kubenswrapper[4796]: I1205 10:42:02.185021 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:42:02 crc kubenswrapper[4796]: I1205 10:42:02.360471 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:42:03 crc kubenswrapper[4796]: I1205 10:42:03.078243 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fb965878c-qncj9" event={"ID":"22c4293b-736f-4c5a-b47d-6a0a870bf1da","Type":"ContainerStarted","Data":"066d9820ce95948fadea67ac5b881233159a19ba3fce25857b2b9d9cd7642b3d"} Dec 05 10:42:03 crc kubenswrapper[4796]: I1205 10:42:03.078981 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:42:03 crc kubenswrapper[4796]: I1205 10:42:03.081533 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkrsm" 
event={"ID":"02d1fa9e-96e1-44c5-89dd-e2c619890cee","Type":"ContainerStarted","Data":"27f365494cbefc6f66de48e7357f7d3cbf903d85117b37ffaaa0daf8e8ef7250"} Dec 05 10:42:03 crc kubenswrapper[4796]: I1205 10:42:03.082959 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" event={"ID":"6edb9afc-40e5-4a55-bf3e-b77c4fe4951b","Type":"ContainerStarted","Data":"b6912e8748f3a5095e2aa980142542c9843ae103a79c6c4ed651bd78591acb16"} Dec 05 10:42:03 crc kubenswrapper[4796]: I1205 10:42:03.085196 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9b577c7dc-s8nmt" event={"ID":"116dc4c5-e13e-494d-8909-3a3e23c45ec1","Type":"ContainerStarted","Data":"09a6fd3541f24c71c9d71990f815f4489a2ced4a165936d3e8bc6cc17ee0fc1b"} Dec 05 10:42:03 crc kubenswrapper[4796]: I1205 10:42:03.104804 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-fb965878c-qncj9" podStartSLOduration=4.104779727 podStartE2EDuration="4.104779727s" podCreationTimestamp="2025-12-05 10:41:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:42:03.096490462 +0000 UTC m=+869.384595974" watchObservedRunningTime="2025-12-05 10:42:03.104779727 +0000 UTC m=+869.392885240" Dec 05 10:42:03 crc kubenswrapper[4796]: I1205 10:42:03.116999 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-b4d7f4754-f9kqr" podStartSLOduration=3.118187213 podStartE2EDuration="6.11697987s" podCreationTimestamp="2025-12-05 10:41:57 +0000 UTC" firstStartedPulling="2025-12-05 10:41:58.484159867 +0000 UTC m=+864.772265380" lastFinishedPulling="2025-12-05 10:42:01.482952525 +0000 UTC m=+867.771058037" observedRunningTime="2025-12-05 10:42:03.115676079 +0000 UTC m=+869.403781591" watchObservedRunningTime="2025-12-05 10:42:03.11697987 +0000 UTC m=+869.405085384" Dec 
05 10:42:03 crc kubenswrapper[4796]: I1205 10:42:03.130170 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-9b577c7dc-s8nmt" podStartSLOduration=3.047788052 podStartE2EDuration="6.130157984s" podCreationTimestamp="2025-12-05 10:41:57 +0000 UTC" firstStartedPulling="2025-12-05 10:41:58.40251048 +0000 UTC m=+864.690615994" lastFinishedPulling="2025-12-05 10:42:01.484880412 +0000 UTC m=+867.772985926" observedRunningTime="2025-12-05 10:42:03.128441845 +0000 UTC m=+869.416547358" watchObservedRunningTime="2025-12-05 10:42:03.130157984 +0000 UTC m=+869.418263497" Dec 05 10:42:03 crc kubenswrapper[4796]: I1205 10:42:03.150312 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pkrsm" podStartSLOduration=14.23518033 podStartE2EDuration="18.150291774s" podCreationTimestamp="2025-12-05 10:41:45 +0000 UTC" firstStartedPulling="2025-12-05 10:41:57.935793501 +0000 UTC m=+864.223899015" lastFinishedPulling="2025-12-05 10:42:01.850904946 +0000 UTC m=+868.139010459" observedRunningTime="2025-12-05 10:42:03.144814339 +0000 UTC m=+869.432919852" watchObservedRunningTime="2025-12-05 10:42:03.150291774 +0000 UTC m=+869.438397287" Dec 05 10:42:03 crc kubenswrapper[4796]: I1205 10:42:03.720886 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:42:03 crc kubenswrapper[4796]: I1205 10:42:03.887388 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-56d7c49fdd-qssn9"] Dec 05 10:42:03 crc kubenswrapper[4796]: I1205 10:42:03.888637 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:03 crc kubenswrapper[4796]: I1205 10:42:03.897665 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 05 10:42:03 crc kubenswrapper[4796]: I1205 10:42:03.898661 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56d7c49fdd-qssn9"] Dec 05 10:42:03 crc kubenswrapper[4796]: I1205 10:42:03.901911 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:03.999946 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f02773b-d7af-447e-ab61-e59b12b5b138-config-data-custom\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.000007 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f02773b-d7af-447e-ab61-e59b12b5b138-public-tls-certs\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.000028 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mm9q\" (UniqueName: \"kubernetes.io/projected/1f02773b-d7af-447e-ab61-e59b12b5b138-kube-api-access-2mm9q\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.000140 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f02773b-d7af-447e-ab61-e59b12b5b138-internal-tls-certs\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.000190 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f02773b-d7af-447e-ab61-e59b12b5b138-logs\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.000298 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f02773b-d7af-447e-ab61-e59b12b5b138-config-data\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.000443 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f02773b-d7af-447e-ab61-e59b12b5b138-combined-ca-bundle\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.018788 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-78ddb58-f7j44" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.106448 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f02773b-d7af-447e-ab61-e59b12b5b138-combined-ca-bundle\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " 
pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.106594 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f02773b-d7af-447e-ab61-e59b12b5b138-config-data-custom\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.106620 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f02773b-d7af-447e-ab61-e59b12b5b138-public-tls-certs\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.106641 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mm9q\" (UniqueName: \"kubernetes.io/projected/1f02773b-d7af-447e-ab61-e59b12b5b138-kube-api-access-2mm9q\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.106660 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f02773b-d7af-447e-ab61-e59b12b5b138-internal-tls-certs\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.106678 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f02773b-d7af-447e-ab61-e59b12b5b138-logs\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " 
pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.106732 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f02773b-d7af-447e-ab61-e59b12b5b138-config-data\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.111599 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69fdddc9b6-2ckhp"] Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.111792 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69fdddc9b6-2ckhp" podUID="fdca92fe-39ad-41e9-978b-1757290eee03" containerName="horizon-log" containerID="cri-o://015efb0f191edca8f32d554f7c90a5e7333f1ac4f7dc4f0419abd600758422d5" gracePeriod=30 Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.112161 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69fdddc9b6-2ckhp" podUID="fdca92fe-39ad-41e9-978b-1757290eee03" containerName="horizon" containerID="cri-o://2c7054840b5c4d67e55e14bcd05d7db816598f2a9fee7b6e51d65a69305a3d62" gracePeriod=30 Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.114816 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f02773b-d7af-447e-ab61-e59b12b5b138-logs\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.124822 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f02773b-d7af-447e-ab61-e59b12b5b138-combined-ca-bundle\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " 
pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.129708 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f02773b-d7af-447e-ab61-e59b12b5b138-public-tls-certs\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.132737 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f02773b-d7af-447e-ab61-e59b12b5b138-internal-tls-certs\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.133154 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f02773b-d7af-447e-ab61-e59b12b5b138-config-data-custom\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.134652 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mm9q\" (UniqueName: \"kubernetes.io/projected/1f02773b-d7af-447e-ab61-e59b12b5b138-kube-api-access-2mm9q\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.159526 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f02773b-d7af-447e-ab61-e59b12b5b138-config-data\") pod \"barbican-api-56d7c49fdd-qssn9\" (UID: \"1f02773b-d7af-447e-ab61-e59b12b5b138\") " pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc 
kubenswrapper[4796]: I1205 10:42:04.204970 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.666388 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-56d7c49fdd-qssn9"] Dec 05 10:42:04 crc kubenswrapper[4796]: W1205 10:42:04.671125 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f02773b_d7af_447e_ab61_e59b12b5b138.slice/crio-da8d742b39e9025acc654524e36e653854272e242dcaaf5ec17796007cb66e46 WatchSource:0}: Error finding container da8d742b39e9025acc654524e36e653854272e242dcaaf5ec17796007cb66e46: Status 404 returned error can't find the container with id da8d742b39e9025acc654524e36e653854272e242dcaaf5ec17796007cb66e46 Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.727836 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:42:04 crc kubenswrapper[4796]: I1205 10:42:04.863325 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:42:05 crc kubenswrapper[4796]: I1205 10:42:05.114823 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56d7c49fdd-qssn9" event={"ID":"1f02773b-d7af-447e-ab61-e59b12b5b138","Type":"ContainerStarted","Data":"cd74c884fc9532282e91cafb3ff6025fe3aac22dab58e3774e1661a7512d905d"} Dec 05 10:42:05 crc kubenswrapper[4796]: I1205 10:42:05.115132 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-56d7c49fdd-qssn9" event={"ID":"1f02773b-d7af-447e-ab61-e59b12b5b138","Type":"ContainerStarted","Data":"dc2f05122133e0cc806ba0e2cd796e44515a6ce86881350d1e4c45968da958ab"} Dec 05 10:42:05 crc kubenswrapper[4796]: I1205 10:42:05.115149 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-56d7c49fdd-qssn9" event={"ID":"1f02773b-d7af-447e-ab61-e59b12b5b138","Type":"ContainerStarted","Data":"da8d742b39e9025acc654524e36e653854272e242dcaaf5ec17796007cb66e46"} Dec 05 10:42:05 crc kubenswrapper[4796]: I1205 10:42:05.115195 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:05 crc kubenswrapper[4796]: I1205 10:42:05.115219 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:05 crc kubenswrapper[4796]: I1205 10:42:05.814618 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pkrsm" Dec 05 10:42:05 crc kubenswrapper[4796]: I1205 10:42:05.814664 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pkrsm" Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.859271 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pkrsm" podUID="02d1fa9e-96e1-44c5-89dd-e2c619890cee" containerName="registry-server" probeResult="failure" output=< Dec 05 10:42:06 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" within 1s Dec 05 10:42:06 crc kubenswrapper[4796]: > Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.920381 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.938721 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.945870 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.946342 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-56d7c49fdd-qssn9" podStartSLOduration=3.946314314 podStartE2EDuration="3.946314314s" podCreationTimestamp="2025-12-05 10:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:42:05.131941193 +0000 UTC m=+871.420046705" watchObservedRunningTime="2025-12-05 10:42:06.946314314 +0000 UTC m=+873.234419827" Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.970428 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/611e3579-a9f1-409e-9d3a-071a436916fd-logs\") pod \"611e3579-a9f1-409e-9d3a-071a436916fd\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.970584 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjnlx\" (UniqueName: \"kubernetes.io/projected/e9dd19b1-8fb3-439c-80e1-126c13ca90da-kube-api-access-fjnlx\") pod \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.970641 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1c11605-37fa-4897-9583-2244b3de20c1-config-data\") pod \"c1c11605-37fa-4897-9583-2244b3de20c1\" (UID: \"c1c11605-37fa-4897-9583-2244b3de20c1\") " Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.970662 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1c11605-37fa-4897-9583-2244b3de20c1-logs\") pod \"c1c11605-37fa-4897-9583-2244b3de20c1\" (UID: 
\"c1c11605-37fa-4897-9583-2244b3de20c1\") " Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.970714 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e9dd19b1-8fb3-439c-80e1-126c13ca90da-horizon-secret-key\") pod \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.970740 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pggfj\" (UniqueName: \"kubernetes.io/projected/611e3579-a9f1-409e-9d3a-071a436916fd-kube-api-access-pggfj\") pod \"611e3579-a9f1-409e-9d3a-071a436916fd\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.970777 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9dd19b1-8fb3-439c-80e1-126c13ca90da-config-data\") pod \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.970799 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9dd19b1-8fb3-439c-80e1-126c13ca90da-scripts\") pod \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.970854 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7lxj\" (UniqueName: \"kubernetes.io/projected/c1c11605-37fa-4897-9583-2244b3de20c1-kube-api-access-c7lxj\") pod \"c1c11605-37fa-4897-9583-2244b3de20c1\" (UID: \"c1c11605-37fa-4897-9583-2244b3de20c1\") " Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.970878 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" 
(UniqueName: \"kubernetes.io/secret/611e3579-a9f1-409e-9d3a-071a436916fd-horizon-secret-key\") pod \"611e3579-a9f1-409e-9d3a-071a436916fd\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.970914 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/611e3579-a9f1-409e-9d3a-071a436916fd-config-data\") pod \"611e3579-a9f1-409e-9d3a-071a436916fd\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.970929 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c1c11605-37fa-4897-9583-2244b3de20c1-scripts\") pod \"c1c11605-37fa-4897-9583-2244b3de20c1\" (UID: \"c1c11605-37fa-4897-9583-2244b3de20c1\") " Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.971027 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1c11605-37fa-4897-9583-2244b3de20c1-horizon-secret-key\") pod \"c1c11605-37fa-4897-9583-2244b3de20c1\" (UID: \"c1c11605-37fa-4897-9583-2244b3de20c1\") " Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.971051 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/611e3579-a9f1-409e-9d3a-071a436916fd-scripts\") pod \"611e3579-a9f1-409e-9d3a-071a436916fd\" (UID: \"611e3579-a9f1-409e-9d3a-071a436916fd\") " Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.971107 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9dd19b1-8fb3-439c-80e1-126c13ca90da-logs\") pod \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\" (UID: \"e9dd19b1-8fb3-439c-80e1-126c13ca90da\") " Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.970920 4796 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/611e3579-a9f1-409e-9d3a-071a436916fd-logs" (OuterVolumeSpecName: "logs") pod "611e3579-a9f1-409e-9d3a-071a436916fd" (UID: "611e3579-a9f1-409e-9d3a-071a436916fd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.971593 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9dd19b1-8fb3-439c-80e1-126c13ca90da-logs" (OuterVolumeSpecName: "logs") pod "e9dd19b1-8fb3-439c-80e1-126c13ca90da" (UID: "e9dd19b1-8fb3-439c-80e1-126c13ca90da"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.972845 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1c11605-37fa-4897-9583-2244b3de20c1-logs" (OuterVolumeSpecName: "logs") pod "c1c11605-37fa-4897-9583-2244b3de20c1" (UID: "c1c11605-37fa-4897-9583-2244b3de20c1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.974414 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/611e3579-a9f1-409e-9d3a-071a436916fd-logs\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.974435 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1c11605-37fa-4897-9583-2244b3de20c1-logs\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.974445 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9dd19b1-8fb3-439c-80e1-126c13ca90da-logs\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.978809 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/611e3579-a9f1-409e-9d3a-071a436916fd-kube-api-access-pggfj" (OuterVolumeSpecName: "kube-api-access-pggfj") pod "611e3579-a9f1-409e-9d3a-071a436916fd" (UID: "611e3579-a9f1-409e-9d3a-071a436916fd"). InnerVolumeSpecName "kube-api-access-pggfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.994018 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9dd19b1-8fb3-439c-80e1-126c13ca90da-kube-api-access-fjnlx" (OuterVolumeSpecName: "kube-api-access-fjnlx") pod "e9dd19b1-8fb3-439c-80e1-126c13ca90da" (UID: "e9dd19b1-8fb3-439c-80e1-126c13ca90da"). InnerVolumeSpecName "kube-api-access-fjnlx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.995633 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9dd19b1-8fb3-439c-80e1-126c13ca90da-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e9dd19b1-8fb3-439c-80e1-126c13ca90da" (UID: "e9dd19b1-8fb3-439c-80e1-126c13ca90da"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.995808 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1c11605-37fa-4897-9583-2244b3de20c1-kube-api-access-c7lxj" (OuterVolumeSpecName: "kube-api-access-c7lxj") pod "c1c11605-37fa-4897-9583-2244b3de20c1" (UID: "c1c11605-37fa-4897-9583-2244b3de20c1"). InnerVolumeSpecName "kube-api-access-c7lxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.997766 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/611e3579-a9f1-409e-9d3a-071a436916fd-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "611e3579-a9f1-409e-9d3a-071a436916fd" (UID: "611e3579-a9f1-409e-9d3a-071a436916fd"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.999026 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9dd19b1-8fb3-439c-80e1-126c13ca90da-config-data" (OuterVolumeSpecName: "config-data") pod "e9dd19b1-8fb3-439c-80e1-126c13ca90da" (UID: "e9dd19b1-8fb3-439c-80e1-126c13ca90da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:42:06 crc kubenswrapper[4796]: I1205 10:42:06.999518 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1c11605-37fa-4897-9583-2244b3de20c1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c1c11605-37fa-4897-9583-2244b3de20c1" (UID: "c1c11605-37fa-4897-9583-2244b3de20c1"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.007930 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/611e3579-a9f1-409e-9d3a-071a436916fd-scripts" (OuterVolumeSpecName: "scripts") pod "611e3579-a9f1-409e-9d3a-071a436916fd" (UID: "611e3579-a9f1-409e-9d3a-071a436916fd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.012383 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/611e3579-a9f1-409e-9d3a-071a436916fd-config-data" (OuterVolumeSpecName: "config-data") pod "611e3579-a9f1-409e-9d3a-071a436916fd" (UID: "611e3579-a9f1-409e-9d3a-071a436916fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.013657 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1c11605-37fa-4897-9583-2244b3de20c1-scripts" (OuterVolumeSpecName: "scripts") pod "c1c11605-37fa-4897-9583-2244b3de20c1" (UID: "c1c11605-37fa-4897-9583-2244b3de20c1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.016891 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9dd19b1-8fb3-439c-80e1-126c13ca90da-scripts" (OuterVolumeSpecName: "scripts") pod "e9dd19b1-8fb3-439c-80e1-126c13ca90da" (UID: "e9dd19b1-8fb3-439c-80e1-126c13ca90da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.021314 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1c11605-37fa-4897-9583-2244b3de20c1-config-data" (OuterVolumeSpecName: "config-data") pod "c1c11605-37fa-4897-9583-2244b3de20c1" (UID: "c1c11605-37fa-4897-9583-2244b3de20c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.078440 4796 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c1c11605-37fa-4897-9583-2244b3de20c1-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.078484 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/611e3579-a9f1-409e-9d3a-071a436916fd-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.078498 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjnlx\" (UniqueName: \"kubernetes.io/projected/e9dd19b1-8fb3-439c-80e1-126c13ca90da-kube-api-access-fjnlx\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.078511 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c1c11605-37fa-4897-9583-2244b3de20c1-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:07 
crc kubenswrapper[4796]: I1205 10:42:07.078521 4796 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e9dd19b1-8fb3-439c-80e1-126c13ca90da-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.078532 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pggfj\" (UniqueName: \"kubernetes.io/projected/611e3579-a9f1-409e-9d3a-071a436916fd-kube-api-access-pggfj\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.078541 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e9dd19b1-8fb3-439c-80e1-126c13ca90da-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.078588 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e9dd19b1-8fb3-439c-80e1-126c13ca90da-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.078601 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7lxj\" (UniqueName: \"kubernetes.io/projected/c1c11605-37fa-4897-9583-2244b3de20c1-kube-api-access-c7lxj\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.078612 4796 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/611e3579-a9f1-409e-9d3a-071a436916fd-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.078621 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/611e3579-a9f1-409e-9d3a-071a436916fd-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.078630 4796 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/c1c11605-37fa-4897-9583-2244b3de20c1-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.135469 4796 generic.go:334] "Generic (PLEG): container finished" podID="c1c11605-37fa-4897-9583-2244b3de20c1" containerID="8d3ed8886e48d0937c93a06291a691bdaed8387c4ffbf9ab4f049f5f350aacb4" exitCode=137 Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.135507 4796 generic.go:334] "Generic (PLEG): container finished" podID="c1c11605-37fa-4897-9583-2244b3de20c1" containerID="20a59c54d7fa14a46c2ec3e2c2604bd380acd900ec4e0dcfcb35b4bf823514f1" exitCode=137 Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.135598 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c85cf4b9c-c294z" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.135574 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c85cf4b9c-c294z" event={"ID":"c1c11605-37fa-4897-9583-2244b3de20c1","Type":"ContainerDied","Data":"8d3ed8886e48d0937c93a06291a691bdaed8387c4ffbf9ab4f049f5f350aacb4"} Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.135754 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c85cf4b9c-c294z" event={"ID":"c1c11605-37fa-4897-9583-2244b3de20c1","Type":"ContainerDied","Data":"20a59c54d7fa14a46c2ec3e2c2604bd380acd900ec4e0dcfcb35b4bf823514f1"} Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.135769 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c85cf4b9c-c294z" event={"ID":"c1c11605-37fa-4897-9583-2244b3de20c1","Type":"ContainerDied","Data":"fbb457bb1e2d4b7003927255fd2e253b00c6352068dfdea91c512b89bc11069b"} Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.135796 4796 scope.go:117] "RemoveContainer" containerID="8d3ed8886e48d0937c93a06291a691bdaed8387c4ffbf9ab4f049f5f350aacb4" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.137914 
4796 generic.go:334] "Generic (PLEG): container finished" podID="611e3579-a9f1-409e-9d3a-071a436916fd" containerID="e4d7293a379e60bcb7cc7b73d508695005527a736fc8f795f52f71a3ef1c47b7" exitCode=137 Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.137936 4796 generic.go:334] "Generic (PLEG): container finished" podID="611e3579-a9f1-409e-9d3a-071a436916fd" containerID="aca2ca046aa59b204780b31a82bc88bd01a4ad9f53f2bfc1cc707610a214f7e4" exitCode=137 Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.138004 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8549c6756f-mnmb6" event={"ID":"611e3579-a9f1-409e-9d3a-071a436916fd","Type":"ContainerDied","Data":"e4d7293a379e60bcb7cc7b73d508695005527a736fc8f795f52f71a3ef1c47b7"} Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.138030 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8549c6756f-mnmb6" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.138037 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8549c6756f-mnmb6" event={"ID":"611e3579-a9f1-409e-9d3a-071a436916fd","Type":"ContainerDied","Data":"aca2ca046aa59b204780b31a82bc88bd01a4ad9f53f2bfc1cc707610a214f7e4"} Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.138199 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8549c6756f-mnmb6" event={"ID":"611e3579-a9f1-409e-9d3a-071a436916fd","Type":"ContainerDied","Data":"fc4105c76a74f969f58bfe8496f1e923564f7bcfd6795802cdb093702f7221d0"} Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.140841 4796 generic.go:334] "Generic (PLEG): container finished" podID="e9dd19b1-8fb3-439c-80e1-126c13ca90da" containerID="abc2e68f17f74c86686a448eacd7b2c1a7222548b9997a3839cc07751f057cbf" exitCode=137 Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.140865 4796 generic.go:334] "Generic (PLEG): container finished" podID="e9dd19b1-8fb3-439c-80e1-126c13ca90da" 
containerID="6581a8647abd7af5e40850e6db8eb84d5aaa95fc84edd466907731e4f75fbb6f" exitCode=137 Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.141176 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75f565c8bf-4qfx5" event={"ID":"e9dd19b1-8fb3-439c-80e1-126c13ca90da","Type":"ContainerDied","Data":"abc2e68f17f74c86686a448eacd7b2c1a7222548b9997a3839cc07751f057cbf"} Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.141217 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75f565c8bf-4qfx5" event={"ID":"e9dd19b1-8fb3-439c-80e1-126c13ca90da","Type":"ContainerDied","Data":"6581a8647abd7af5e40850e6db8eb84d5aaa95fc84edd466907731e4f75fbb6f"} Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.141242 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75f565c8bf-4qfx5" event={"ID":"e9dd19b1-8fb3-439c-80e1-126c13ca90da","Type":"ContainerDied","Data":"75989a5658acf5d16e8fa3f4a37d909fddca28cc4b534663b6602d5cee3944ac"} Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.141316 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75f565c8bf-4qfx5" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.186469 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c85cf4b9c-c294z"] Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.195299 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5c85cf4b9c-c294z"] Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.201870 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-8549c6756f-mnmb6"] Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.207059 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-8549c6756f-mnmb6"] Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.213179 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75f565c8bf-4qfx5"] Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.218353 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-75f565c8bf-4qfx5"] Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.290422 4796 scope.go:117] "RemoveContainer" containerID="20a59c54d7fa14a46c2ec3e2c2604bd380acd900ec4e0dcfcb35b4bf823514f1" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.312558 4796 scope.go:117] "RemoveContainer" containerID="8d3ed8886e48d0937c93a06291a691bdaed8387c4ffbf9ab4f049f5f350aacb4" Dec 05 10:42:07 crc kubenswrapper[4796]: E1205 10:42:07.315441 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d3ed8886e48d0937c93a06291a691bdaed8387c4ffbf9ab4f049f5f350aacb4\": container with ID starting with 8d3ed8886e48d0937c93a06291a691bdaed8387c4ffbf9ab4f049f5f350aacb4 not found: ID does not exist" containerID="8d3ed8886e48d0937c93a06291a691bdaed8387c4ffbf9ab4f049f5f350aacb4" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.315561 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8d3ed8886e48d0937c93a06291a691bdaed8387c4ffbf9ab4f049f5f350aacb4"} err="failed to get container status \"8d3ed8886e48d0937c93a06291a691bdaed8387c4ffbf9ab4f049f5f350aacb4\": rpc error: code = NotFound desc = could not find container \"8d3ed8886e48d0937c93a06291a691bdaed8387c4ffbf9ab4f049f5f350aacb4\": container with ID starting with 8d3ed8886e48d0937c93a06291a691bdaed8387c4ffbf9ab4f049f5f350aacb4 not found: ID does not exist" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.315675 4796 scope.go:117] "RemoveContainer" containerID="20a59c54d7fa14a46c2ec3e2c2604bd380acd900ec4e0dcfcb35b4bf823514f1" Dec 05 10:42:07 crc kubenswrapper[4796]: E1205 10:42:07.316205 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20a59c54d7fa14a46c2ec3e2c2604bd380acd900ec4e0dcfcb35b4bf823514f1\": container with ID starting with 20a59c54d7fa14a46c2ec3e2c2604bd380acd900ec4e0dcfcb35b4bf823514f1 not found: ID does not exist" containerID="20a59c54d7fa14a46c2ec3e2c2604bd380acd900ec4e0dcfcb35b4bf823514f1" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.316347 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20a59c54d7fa14a46c2ec3e2c2604bd380acd900ec4e0dcfcb35b4bf823514f1"} err="failed to get container status \"20a59c54d7fa14a46c2ec3e2c2604bd380acd900ec4e0dcfcb35b4bf823514f1\": rpc error: code = NotFound desc = could not find container \"20a59c54d7fa14a46c2ec3e2c2604bd380acd900ec4e0dcfcb35b4bf823514f1\": container with ID starting with 20a59c54d7fa14a46c2ec3e2c2604bd380acd900ec4e0dcfcb35b4bf823514f1 not found: ID does not exist" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.316426 4796 scope.go:117] "RemoveContainer" containerID="8d3ed8886e48d0937c93a06291a691bdaed8387c4ffbf9ab4f049f5f350aacb4" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.317290 4796 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"8d3ed8886e48d0937c93a06291a691bdaed8387c4ffbf9ab4f049f5f350aacb4"} err="failed to get container status \"8d3ed8886e48d0937c93a06291a691bdaed8387c4ffbf9ab4f049f5f350aacb4\": rpc error: code = NotFound desc = could not find container \"8d3ed8886e48d0937c93a06291a691bdaed8387c4ffbf9ab4f049f5f350aacb4\": container with ID starting with 8d3ed8886e48d0937c93a06291a691bdaed8387c4ffbf9ab4f049f5f350aacb4 not found: ID does not exist" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.317603 4796 scope.go:117] "RemoveContainer" containerID="20a59c54d7fa14a46c2ec3e2c2604bd380acd900ec4e0dcfcb35b4bf823514f1" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.318312 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20a59c54d7fa14a46c2ec3e2c2604bd380acd900ec4e0dcfcb35b4bf823514f1"} err="failed to get container status \"20a59c54d7fa14a46c2ec3e2c2604bd380acd900ec4e0dcfcb35b4bf823514f1\": rpc error: code = NotFound desc = could not find container \"20a59c54d7fa14a46c2ec3e2c2604bd380acd900ec4e0dcfcb35b4bf823514f1\": container with ID starting with 20a59c54d7fa14a46c2ec3e2c2604bd380acd900ec4e0dcfcb35b4bf823514f1 not found: ID does not exist" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.318345 4796 scope.go:117] "RemoveContainer" containerID="e4d7293a379e60bcb7cc7b73d508695005527a736fc8f795f52f71a3ef1c47b7" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.469740 4796 scope.go:117] "RemoveContainer" containerID="aca2ca046aa59b204780b31a82bc88bd01a4ad9f53f2bfc1cc707610a214f7e4" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.538917 4796 scope.go:117] "RemoveContainer" containerID="e4d7293a379e60bcb7cc7b73d508695005527a736fc8f795f52f71a3ef1c47b7" Dec 05 10:42:07 crc kubenswrapper[4796]: E1205 10:42:07.539257 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e4d7293a379e60bcb7cc7b73d508695005527a736fc8f795f52f71a3ef1c47b7\": container with ID starting with e4d7293a379e60bcb7cc7b73d508695005527a736fc8f795f52f71a3ef1c47b7 not found: ID does not exist" containerID="e4d7293a379e60bcb7cc7b73d508695005527a736fc8f795f52f71a3ef1c47b7" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.539289 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4d7293a379e60bcb7cc7b73d508695005527a736fc8f795f52f71a3ef1c47b7"} err="failed to get container status \"e4d7293a379e60bcb7cc7b73d508695005527a736fc8f795f52f71a3ef1c47b7\": rpc error: code = NotFound desc = could not find container \"e4d7293a379e60bcb7cc7b73d508695005527a736fc8f795f52f71a3ef1c47b7\": container with ID starting with e4d7293a379e60bcb7cc7b73d508695005527a736fc8f795f52f71a3ef1c47b7 not found: ID does not exist" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.539311 4796 scope.go:117] "RemoveContainer" containerID="aca2ca046aa59b204780b31a82bc88bd01a4ad9f53f2bfc1cc707610a214f7e4" Dec 05 10:42:07 crc kubenswrapper[4796]: E1205 10:42:07.539598 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aca2ca046aa59b204780b31a82bc88bd01a4ad9f53f2bfc1cc707610a214f7e4\": container with ID starting with aca2ca046aa59b204780b31a82bc88bd01a4ad9f53f2bfc1cc707610a214f7e4 not found: ID does not exist" containerID="aca2ca046aa59b204780b31a82bc88bd01a4ad9f53f2bfc1cc707610a214f7e4" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.539649 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aca2ca046aa59b204780b31a82bc88bd01a4ad9f53f2bfc1cc707610a214f7e4"} err="failed to get container status \"aca2ca046aa59b204780b31a82bc88bd01a4ad9f53f2bfc1cc707610a214f7e4\": rpc error: code = NotFound desc = could not find container \"aca2ca046aa59b204780b31a82bc88bd01a4ad9f53f2bfc1cc707610a214f7e4\": container with ID 
starting with aca2ca046aa59b204780b31a82bc88bd01a4ad9f53f2bfc1cc707610a214f7e4 not found: ID does not exist" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.539677 4796 scope.go:117] "RemoveContainer" containerID="e4d7293a379e60bcb7cc7b73d508695005527a736fc8f795f52f71a3ef1c47b7" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.540087 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4d7293a379e60bcb7cc7b73d508695005527a736fc8f795f52f71a3ef1c47b7"} err="failed to get container status \"e4d7293a379e60bcb7cc7b73d508695005527a736fc8f795f52f71a3ef1c47b7\": rpc error: code = NotFound desc = could not find container \"e4d7293a379e60bcb7cc7b73d508695005527a736fc8f795f52f71a3ef1c47b7\": container with ID starting with e4d7293a379e60bcb7cc7b73d508695005527a736fc8f795f52f71a3ef1c47b7 not found: ID does not exist" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.540114 4796 scope.go:117] "RemoveContainer" containerID="aca2ca046aa59b204780b31a82bc88bd01a4ad9f53f2bfc1cc707610a214f7e4" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.541405 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aca2ca046aa59b204780b31a82bc88bd01a4ad9f53f2bfc1cc707610a214f7e4"} err="failed to get container status \"aca2ca046aa59b204780b31a82bc88bd01a4ad9f53f2bfc1cc707610a214f7e4\": rpc error: code = NotFound desc = could not find container \"aca2ca046aa59b204780b31a82bc88bd01a4ad9f53f2bfc1cc707610a214f7e4\": container with ID starting with aca2ca046aa59b204780b31a82bc88bd01a4ad9f53f2bfc1cc707610a214f7e4 not found: ID does not exist" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.541442 4796 scope.go:117] "RemoveContainer" containerID="abc2e68f17f74c86686a448eacd7b2c1a7222548b9997a3839cc07751f057cbf" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.692603 4796 scope.go:117] "RemoveContainer" 
containerID="6581a8647abd7af5e40850e6db8eb84d5aaa95fc84edd466907731e4f75fbb6f" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.710863 4796 scope.go:117] "RemoveContainer" containerID="abc2e68f17f74c86686a448eacd7b2c1a7222548b9997a3839cc07751f057cbf" Dec 05 10:42:07 crc kubenswrapper[4796]: E1205 10:42:07.711213 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abc2e68f17f74c86686a448eacd7b2c1a7222548b9997a3839cc07751f057cbf\": container with ID starting with abc2e68f17f74c86686a448eacd7b2c1a7222548b9997a3839cc07751f057cbf not found: ID does not exist" containerID="abc2e68f17f74c86686a448eacd7b2c1a7222548b9997a3839cc07751f057cbf" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.711267 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abc2e68f17f74c86686a448eacd7b2c1a7222548b9997a3839cc07751f057cbf"} err="failed to get container status \"abc2e68f17f74c86686a448eacd7b2c1a7222548b9997a3839cc07751f057cbf\": rpc error: code = NotFound desc = could not find container \"abc2e68f17f74c86686a448eacd7b2c1a7222548b9997a3839cc07751f057cbf\": container with ID starting with abc2e68f17f74c86686a448eacd7b2c1a7222548b9997a3839cc07751f057cbf not found: ID does not exist" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.711290 4796 scope.go:117] "RemoveContainer" containerID="6581a8647abd7af5e40850e6db8eb84d5aaa95fc84edd466907731e4f75fbb6f" Dec 05 10:42:07 crc kubenswrapper[4796]: E1205 10:42:07.711713 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6581a8647abd7af5e40850e6db8eb84d5aaa95fc84edd466907731e4f75fbb6f\": container with ID starting with 6581a8647abd7af5e40850e6db8eb84d5aaa95fc84edd466907731e4f75fbb6f not found: ID does not exist" containerID="6581a8647abd7af5e40850e6db8eb84d5aaa95fc84edd466907731e4f75fbb6f" Dec 05 10:42:07 crc 
kubenswrapper[4796]: I1205 10:42:07.711779 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6581a8647abd7af5e40850e6db8eb84d5aaa95fc84edd466907731e4f75fbb6f"} err="failed to get container status \"6581a8647abd7af5e40850e6db8eb84d5aaa95fc84edd466907731e4f75fbb6f\": rpc error: code = NotFound desc = could not find container \"6581a8647abd7af5e40850e6db8eb84d5aaa95fc84edd466907731e4f75fbb6f\": container with ID starting with 6581a8647abd7af5e40850e6db8eb84d5aaa95fc84edd466907731e4f75fbb6f not found: ID does not exist" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.711806 4796 scope.go:117] "RemoveContainer" containerID="abc2e68f17f74c86686a448eacd7b2c1a7222548b9997a3839cc07751f057cbf" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.712129 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abc2e68f17f74c86686a448eacd7b2c1a7222548b9997a3839cc07751f057cbf"} err="failed to get container status \"abc2e68f17f74c86686a448eacd7b2c1a7222548b9997a3839cc07751f057cbf\": rpc error: code = NotFound desc = could not find container \"abc2e68f17f74c86686a448eacd7b2c1a7222548b9997a3839cc07751f057cbf\": container with ID starting with abc2e68f17f74c86686a448eacd7b2c1a7222548b9997a3839cc07751f057cbf not found: ID does not exist" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.712151 4796 scope.go:117] "RemoveContainer" containerID="6581a8647abd7af5e40850e6db8eb84d5aaa95fc84edd466907731e4f75fbb6f" Dec 05 10:42:07 crc kubenswrapper[4796]: I1205 10:42:07.712375 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6581a8647abd7af5e40850e6db8eb84d5aaa95fc84edd466907731e4f75fbb6f"} err="failed to get container status \"6581a8647abd7af5e40850e6db8eb84d5aaa95fc84edd466907731e4f75fbb6f\": rpc error: code = NotFound desc = could not find container \"6581a8647abd7af5e40850e6db8eb84d5aaa95fc84edd466907731e4f75fbb6f\": container 
with ID starting with 6581a8647abd7af5e40850e6db8eb84d5aaa95fc84edd466907731e4f75fbb6f not found: ID does not exist" Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.039339 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="611e3579-a9f1-409e-9d3a-071a436916fd" path="/var/lib/kubelet/pods/611e3579-a9f1-409e-9d3a-071a436916fd/volumes" Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.040358 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1c11605-37fa-4897-9583-2244b3de20c1" path="/var/lib/kubelet/pods/c1c11605-37fa-4897-9583-2244b3de20c1/volumes" Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.040971 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9dd19b1-8fb3-439c-80e1-126c13ca90da" path="/var/lib/kubelet/pods/e9dd19b1-8fb3-439c-80e1-126c13ca90da/volumes" Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.148521 4796 generic.go:334] "Generic (PLEG): container finished" podID="fdca92fe-39ad-41e9-978b-1757290eee03" containerID="2c7054840b5c4d67e55e14bcd05d7db816598f2a9fee7b6e51d65a69305a3d62" exitCode=0 Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.148578 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69fdddc9b6-2ckhp" event={"ID":"fdca92fe-39ad-41e9-978b-1757290eee03","Type":"ContainerDied","Data":"2c7054840b5c4d67e55e14bcd05d7db816598f2a9fee7b6e51d65a69305a3d62"} Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.180320 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.235969 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74fd8b655f-6pbcd"] Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.236195 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" 
podUID="370c8f58-d257-4e3f-a54a-d34231b6dfd5" containerName="dnsmasq-dns" containerID="cri-o://1e3543bd0ac25f2ce11f78c723f86ec480f8eb8ac419fd5b4a86beaf9da6ed74" gracePeriod=10 Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.678648 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.716379 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8nv5\" (UniqueName: \"kubernetes.io/projected/370c8f58-d257-4e3f-a54a-d34231b6dfd5-kube-api-access-q8nv5\") pod \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.716508 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-dns-svc\") pod \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.716573 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-ovsdbserver-nb\") pod \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.716749 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-dns-swift-storage-0\") pod \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.716807 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-config\") pod \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.716863 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-ovsdbserver-sb\") pod \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\" (UID: \"370c8f58-d257-4e3f-a54a-d34231b6dfd5\") " Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.724982 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/370c8f58-d257-4e3f-a54a-d34231b6dfd5-kube-api-access-q8nv5" (OuterVolumeSpecName: "kube-api-access-q8nv5") pod "370c8f58-d257-4e3f-a54a-d34231b6dfd5" (UID: "370c8f58-d257-4e3f-a54a-d34231b6dfd5"). InnerVolumeSpecName "kube-api-access-q8nv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.765076 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "370c8f58-d257-4e3f-a54a-d34231b6dfd5" (UID: "370c8f58-d257-4e3f-a54a-d34231b6dfd5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.766038 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "370c8f58-d257-4e3f-a54a-d34231b6dfd5" (UID: "370c8f58-d257-4e3f-a54a-d34231b6dfd5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.775665 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "370c8f58-d257-4e3f-a54a-d34231b6dfd5" (UID: "370c8f58-d257-4e3f-a54a-d34231b6dfd5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.780040 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-config" (OuterVolumeSpecName: "config") pod "370c8f58-d257-4e3f-a54a-d34231b6dfd5" (UID: "370c8f58-d257-4e3f-a54a-d34231b6dfd5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.783246 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "370c8f58-d257-4e3f-a54a-d34231b6dfd5" (UID: "370c8f58-d257-4e3f-a54a-d34231b6dfd5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.820060 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.820105 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.820116 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.820125 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8nv5\" (UniqueName: \"kubernetes.io/projected/370c8f58-d257-4e3f-a54a-d34231b6dfd5-kube-api-access-q8nv5\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.820138 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:08 crc kubenswrapper[4796]: I1205 10:42:08.820147 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/370c8f58-d257-4e3f-a54a-d34231b6dfd5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:09 crc kubenswrapper[4796]: I1205 10:42:09.177617 4796 generic.go:334] "Generic (PLEG): container finished" podID="370c8f58-d257-4e3f-a54a-d34231b6dfd5" containerID="1e3543bd0ac25f2ce11f78c723f86ec480f8eb8ac419fd5b4a86beaf9da6ed74" exitCode=0 Dec 05 10:42:09 crc kubenswrapper[4796]: I1205 10:42:09.177664 4796 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" event={"ID":"370c8f58-d257-4e3f-a54a-d34231b6dfd5","Type":"ContainerDied","Data":"1e3543bd0ac25f2ce11f78c723f86ec480f8eb8ac419fd5b4a86beaf9da6ed74"} Dec 05 10:42:09 crc kubenswrapper[4796]: I1205 10:42:09.177702 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" Dec 05 10:42:09 crc kubenswrapper[4796]: I1205 10:42:09.177728 4796 scope.go:117] "RemoveContainer" containerID="1e3543bd0ac25f2ce11f78c723f86ec480f8eb8ac419fd5b4a86beaf9da6ed74" Dec 05 10:42:09 crc kubenswrapper[4796]: I1205 10:42:09.177715 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74fd8b655f-6pbcd" event={"ID":"370c8f58-d257-4e3f-a54a-d34231b6dfd5","Type":"ContainerDied","Data":"7638046189bbf1c1afbdb1b880e871e2a9fc6e030ecf89d5812b6703920d655b"} Dec 05 10:42:09 crc kubenswrapper[4796]: I1205 10:42:09.254734 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74fd8b655f-6pbcd"] Dec 05 10:42:09 crc kubenswrapper[4796]: I1205 10:42:09.254925 4796 scope.go:117] "RemoveContainer" containerID="46af83226b845395767247cdc2028373b44feee84b15c1422749bdd4bfd826b7" Dec 05 10:42:09 crc kubenswrapper[4796]: I1205 10:42:09.291891 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74fd8b655f-6pbcd"] Dec 05 10:42:09 crc kubenswrapper[4796]: I1205 10:42:09.327608 4796 scope.go:117] "RemoveContainer" containerID="1e3543bd0ac25f2ce11f78c723f86ec480f8eb8ac419fd5b4a86beaf9da6ed74" Dec 05 10:42:09 crc kubenswrapper[4796]: E1205 10:42:09.328130 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e3543bd0ac25f2ce11f78c723f86ec480f8eb8ac419fd5b4a86beaf9da6ed74\": container with ID starting with 1e3543bd0ac25f2ce11f78c723f86ec480f8eb8ac419fd5b4a86beaf9da6ed74 not found: ID does not exist" 
containerID="1e3543bd0ac25f2ce11f78c723f86ec480f8eb8ac419fd5b4a86beaf9da6ed74" Dec 05 10:42:09 crc kubenswrapper[4796]: I1205 10:42:09.328189 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e3543bd0ac25f2ce11f78c723f86ec480f8eb8ac419fd5b4a86beaf9da6ed74"} err="failed to get container status \"1e3543bd0ac25f2ce11f78c723f86ec480f8eb8ac419fd5b4a86beaf9da6ed74\": rpc error: code = NotFound desc = could not find container \"1e3543bd0ac25f2ce11f78c723f86ec480f8eb8ac419fd5b4a86beaf9da6ed74\": container with ID starting with 1e3543bd0ac25f2ce11f78c723f86ec480f8eb8ac419fd5b4a86beaf9da6ed74 not found: ID does not exist" Dec 05 10:42:09 crc kubenswrapper[4796]: I1205 10:42:09.328222 4796 scope.go:117] "RemoveContainer" containerID="46af83226b845395767247cdc2028373b44feee84b15c1422749bdd4bfd826b7" Dec 05 10:42:09 crc kubenswrapper[4796]: E1205 10:42:09.328559 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46af83226b845395767247cdc2028373b44feee84b15c1422749bdd4bfd826b7\": container with ID starting with 46af83226b845395767247cdc2028373b44feee84b15c1422749bdd4bfd826b7 not found: ID does not exist" containerID="46af83226b845395767247cdc2028373b44feee84b15c1422749bdd4bfd826b7" Dec 05 10:42:09 crc kubenswrapper[4796]: I1205 10:42:09.328805 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46af83226b845395767247cdc2028373b44feee84b15c1422749bdd4bfd826b7"} err="failed to get container status \"46af83226b845395767247cdc2028373b44feee84b15c1422749bdd4bfd826b7\": rpc error: code = NotFound desc = could not find container \"46af83226b845395767247cdc2028373b44feee84b15c1422749bdd4bfd826b7\": container with ID starting with 46af83226b845395767247cdc2028373b44feee84b15c1422749bdd4bfd826b7 not found: ID does not exist" Dec 05 10:42:10 crc kubenswrapper[4796]: I1205 10:42:10.048488 4796 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="370c8f58-d257-4e3f-a54a-d34231b6dfd5" path="/var/lib/kubelet/pods/370c8f58-d257-4e3f-a54a-d34231b6dfd5/volumes" Dec 05 10:42:10 crc kubenswrapper[4796]: I1205 10:42:10.193478 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qh5mr" event={"ID":"04e95ca1-d131-4528-baaf-0be6b98a5edf","Type":"ContainerStarted","Data":"a6eccc45b6647bef041c33d62a9fc5f67eab2da4a5c4f8e725cd959619383c16"} Dec 05 10:42:10 crc kubenswrapper[4796]: I1205 10:42:10.221097 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-qh5mr" podStartSLOduration=5.243384245 podStartE2EDuration="39.221077157s" podCreationTimestamp="2025-12-05 10:41:31 +0000 UTC" firstStartedPulling="2025-12-05 10:41:35.653356837 +0000 UTC m=+841.941462351" lastFinishedPulling="2025-12-05 10:42:09.631049749 +0000 UTC m=+875.919155263" observedRunningTime="2025-12-05 10:42:10.210383337 +0000 UTC m=+876.498488850" watchObservedRunningTime="2025-12-05 10:42:10.221077157 +0000 UTC m=+876.509182670" Dec 05 10:42:10 crc kubenswrapper[4796]: I1205 10:42:10.552250 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69fdddc9b6-2ckhp" podUID="fdca92fe-39ad-41e9-978b-1757290eee03" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.140:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.140:8443: connect: connection refused" Dec 05 10:42:10 crc kubenswrapper[4796]: I1205 10:42:10.608995 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:11 crc kubenswrapper[4796]: I1205 10:42:11.910204 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-56d7c49fdd-qssn9" Dec 05 10:42:11 crc kubenswrapper[4796]: I1205 10:42:11.975878 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-api-5cdb46c784-6qzbh"] Dec 05 10:42:11 crc kubenswrapper[4796]: I1205 10:42:11.976074 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5cdb46c784-6qzbh" podUID="0022df39-ef89-40d9-9be4-5297d4bd6dc5" containerName="barbican-api-log" containerID="cri-o://956720d81920982a3e64e227e1218a04fd6f867cf91263ad9cd1f8e1c197bcba" gracePeriod=30 Dec 05 10:42:11 crc kubenswrapper[4796]: I1205 10:42:11.976420 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5cdb46c784-6qzbh" podUID="0022df39-ef89-40d9-9be4-5297d4bd6dc5" containerName="barbican-api" containerID="cri-o://d9a68af4d90a621eff783dc744beea386f5dbed9856d09adbdab0b03ed810dcb" gracePeriod=30 Dec 05 10:42:12 crc kubenswrapper[4796]: I1205 10:42:12.212058 4796 generic.go:334] "Generic (PLEG): container finished" podID="0022df39-ef89-40d9-9be4-5297d4bd6dc5" containerID="956720d81920982a3e64e227e1218a04fd6f867cf91263ad9cd1f8e1c197bcba" exitCode=143 Dec 05 10:42:12 crc kubenswrapper[4796]: I1205 10:42:12.212096 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cdb46c784-6qzbh" event={"ID":"0022df39-ef89-40d9-9be4-5297d4bd6dc5","Type":"ContainerDied","Data":"956720d81920982a3e64e227e1218a04fd6f867cf91263ad9cd1f8e1c197bcba"} Dec 05 10:42:13 crc kubenswrapper[4796]: I1205 10:42:13.225429 4796 generic.go:334] "Generic (PLEG): container finished" podID="04e95ca1-d131-4528-baaf-0be6b98a5edf" containerID="a6eccc45b6647bef041c33d62a9fc5f67eab2da4a5c4f8e725cd959619383c16" exitCode=0 Dec 05 10:42:13 crc kubenswrapper[4796]: I1205 10:42:13.225561 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qh5mr" event={"ID":"04e95ca1-d131-4528-baaf-0be6b98a5edf","Type":"ContainerDied","Data":"a6eccc45b6647bef041c33d62a9fc5f67eab2da4a5c4f8e725cd959619383c16"} Dec 05 10:42:14 crc kubenswrapper[4796]: I1205 10:42:14.417173 4796 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5dbb66bc57-9lclx" Dec 05 10:42:14 crc kubenswrapper[4796]: I1205 10:42:14.882369 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-644c454648-8vjkb" Dec 05 10:42:14 crc kubenswrapper[4796]: I1205 10:42:14.884302 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-644c454648-8vjkb" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.155443 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5cdb46c784-6qzbh" podUID="0022df39-ef89-40d9-9be4-5297d4bd6dc5" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:56236->10.217.0.156:9311: read: connection reset by peer" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.155730 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5cdb46c784-6qzbh" podUID="0022df39-ef89-40d9-9be4-5297d4bd6dc5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:56230->10.217.0.156:9311: read: connection reset by peer" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.259657 4796 generic.go:334] "Generic (PLEG): container finished" podID="0022df39-ef89-40d9-9be4-5297d4bd6dc5" containerID="d9a68af4d90a621eff783dc744beea386f5dbed9856d09adbdab0b03ed810dcb" exitCode=0 Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.260256 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cdb46c784-6qzbh" event={"ID":"0022df39-ef89-40d9-9be4-5297d4bd6dc5","Type":"ContainerDied","Data":"d9a68af4d90a621eff783dc744beea386f5dbed9856d09adbdab0b03ed810dcb"} Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.310354 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.389227 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-db-sync-config-data\") pod \"04e95ca1-d131-4528-baaf-0be6b98a5edf\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.389606 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-config-data\") pod \"04e95ca1-d131-4528-baaf-0be6b98a5edf\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.389791 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-combined-ca-bundle\") pod \"04e95ca1-d131-4528-baaf-0be6b98a5edf\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.389843 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-scripts\") pod \"04e95ca1-d131-4528-baaf-0be6b98a5edf\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.389974 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h497x\" (UniqueName: \"kubernetes.io/projected/04e95ca1-d131-4528-baaf-0be6b98a5edf-kube-api-access-h497x\") pod \"04e95ca1-d131-4528-baaf-0be6b98a5edf\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.390025 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/04e95ca1-d131-4528-baaf-0be6b98a5edf-etc-machine-id\") pod \"04e95ca1-d131-4528-baaf-0be6b98a5edf\" (UID: \"04e95ca1-d131-4528-baaf-0be6b98a5edf\") " Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.390903 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04e95ca1-d131-4528-baaf-0be6b98a5edf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "04e95ca1-d131-4528-baaf-0be6b98a5edf" (UID: "04e95ca1-d131-4528-baaf-0be6b98a5edf"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.396842 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "04e95ca1-d131-4528-baaf-0be6b98a5edf" (UID: "04e95ca1-d131-4528-baaf-0be6b98a5edf"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.396868 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04e95ca1-d131-4528-baaf-0be6b98a5edf-kube-api-access-h497x" (OuterVolumeSpecName: "kube-api-access-h497x") pod "04e95ca1-d131-4528-baaf-0be6b98a5edf" (UID: "04e95ca1-d131-4528-baaf-0be6b98a5edf"). InnerVolumeSpecName "kube-api-access-h497x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.399010 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-scripts" (OuterVolumeSpecName: "scripts") pod "04e95ca1-d131-4528-baaf-0be6b98a5edf" (UID: "04e95ca1-d131-4528-baaf-0be6b98a5edf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.424859 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04e95ca1-d131-4528-baaf-0be6b98a5edf" (UID: "04e95ca1-d131-4528-baaf-0be6b98a5edf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.474699 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-config-data" (OuterVolumeSpecName: "config-data") pod "04e95ca1-d131-4528-baaf-0be6b98a5edf" (UID: "04e95ca1-d131-4528-baaf-0be6b98a5edf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.495205 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h497x\" (UniqueName: \"kubernetes.io/projected/04e95ca1-d131-4528-baaf-0be6b98a5edf-kube-api-access-h497x\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.495232 4796 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04e95ca1-d131-4528-baaf-0be6b98a5edf-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.495250 4796 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.495258 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-config-data\") on node \"crc\" 
DevicePath \"\"" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.495267 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.495275 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04e95ca1-d131-4528-baaf-0be6b98a5edf-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.567559 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.597049 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0022df39-ef89-40d9-9be4-5297d4bd6dc5-config-data\") pod \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.597124 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0022df39-ef89-40d9-9be4-5297d4bd6dc5-logs\") pod \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.597310 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m9jk\" (UniqueName: \"kubernetes.io/projected/0022df39-ef89-40d9-9be4-5297d4bd6dc5-kube-api-access-4m9jk\") pod \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.597383 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0022df39-ef89-40d9-9be4-5297d4bd6dc5-combined-ca-bundle\") pod \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.597460 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0022df39-ef89-40d9-9be4-5297d4bd6dc5-config-data-custom\") pod \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\" (UID: \"0022df39-ef89-40d9-9be4-5297d4bd6dc5\") " Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.598759 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0022df39-ef89-40d9-9be4-5297d4bd6dc5-logs" (OuterVolumeSpecName: "logs") pod "0022df39-ef89-40d9-9be4-5297d4bd6dc5" (UID: "0022df39-ef89-40d9-9be4-5297d4bd6dc5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.602642 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0022df39-ef89-40d9-9be4-5297d4bd6dc5-kube-api-access-4m9jk" (OuterVolumeSpecName: "kube-api-access-4m9jk") pod "0022df39-ef89-40d9-9be4-5297d4bd6dc5" (UID: "0022df39-ef89-40d9-9be4-5297d4bd6dc5"). InnerVolumeSpecName "kube-api-access-4m9jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.602837 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0022df39-ef89-40d9-9be4-5297d4bd6dc5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0022df39-ef89-40d9-9be4-5297d4bd6dc5" (UID: "0022df39-ef89-40d9-9be4-5297d4bd6dc5"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.633516 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0022df39-ef89-40d9-9be4-5297d4bd6dc5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0022df39-ef89-40d9-9be4-5297d4bd6dc5" (UID: "0022df39-ef89-40d9-9be4-5297d4bd6dc5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.654777 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0022df39-ef89-40d9-9be4-5297d4bd6dc5-config-data" (OuterVolumeSpecName: "config-data") pod "0022df39-ef89-40d9-9be4-5297d4bd6dc5" (UID: "0022df39-ef89-40d9-9be4-5297d4bd6dc5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.699796 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m9jk\" (UniqueName: \"kubernetes.io/projected/0022df39-ef89-40d9-9be4-5297d4bd6dc5-kube-api-access-4m9jk\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.700043 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0022df39-ef89-40d9-9be4-5297d4bd6dc5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.700054 4796 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0022df39-ef89-40d9-9be4-5297d4bd6dc5-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.700064 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0022df39-ef89-40d9-9be4-5297d4bd6dc5-config-data\") on node \"crc\" 
DevicePath \"\"" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.700074 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0022df39-ef89-40d9-9be4-5297d4bd6dc5-logs\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.762639 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 05 10:42:15 crc kubenswrapper[4796]: E1205 10:42:15.762971 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="611e3579-a9f1-409e-9d3a-071a436916fd" containerName="horizon" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.762988 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="611e3579-a9f1-409e-9d3a-071a436916fd" containerName="horizon" Dec 05 10:42:15 crc kubenswrapper[4796]: E1205 10:42:15.762998 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9dd19b1-8fb3-439c-80e1-126c13ca90da" containerName="horizon" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763005 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9dd19b1-8fb3-439c-80e1-126c13ca90da" containerName="horizon" Dec 05 10:42:15 crc kubenswrapper[4796]: E1205 10:42:15.763029 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9dd19b1-8fb3-439c-80e1-126c13ca90da" containerName="horizon-log" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763035 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9dd19b1-8fb3-439c-80e1-126c13ca90da" containerName="horizon-log" Dec 05 10:42:15 crc kubenswrapper[4796]: E1205 10:42:15.763042 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="370c8f58-d257-4e3f-a54a-d34231b6dfd5" containerName="dnsmasq-dns" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763047 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="370c8f58-d257-4e3f-a54a-d34231b6dfd5" containerName="dnsmasq-dns" Dec 05 10:42:15 crc kubenswrapper[4796]: E1205 
10:42:15.763057 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c11605-37fa-4897-9583-2244b3de20c1" containerName="horizon" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763062 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c11605-37fa-4897-9583-2244b3de20c1" containerName="horizon" Dec 05 10:42:15 crc kubenswrapper[4796]: E1205 10:42:15.763078 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04e95ca1-d131-4528-baaf-0be6b98a5edf" containerName="cinder-db-sync" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763083 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="04e95ca1-d131-4528-baaf-0be6b98a5edf" containerName="cinder-db-sync" Dec 05 10:42:15 crc kubenswrapper[4796]: E1205 10:42:15.763096 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="370c8f58-d257-4e3f-a54a-d34231b6dfd5" containerName="init" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763101 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="370c8f58-d257-4e3f-a54a-d34231b6dfd5" containerName="init" Dec 05 10:42:15 crc kubenswrapper[4796]: E1205 10:42:15.763107 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0022df39-ef89-40d9-9be4-5297d4bd6dc5" containerName="barbican-api-log" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763112 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0022df39-ef89-40d9-9be4-5297d4bd6dc5" containerName="barbican-api-log" Dec 05 10:42:15 crc kubenswrapper[4796]: E1205 10:42:15.763120 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1c11605-37fa-4897-9583-2244b3de20c1" containerName="horizon-log" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763125 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1c11605-37fa-4897-9583-2244b3de20c1" containerName="horizon-log" Dec 05 10:42:15 crc kubenswrapper[4796]: E1205 10:42:15.763134 4796 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="0022df39-ef89-40d9-9be4-5297d4bd6dc5" containerName="barbican-api" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763140 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0022df39-ef89-40d9-9be4-5297d4bd6dc5" containerName="barbican-api" Dec 05 10:42:15 crc kubenswrapper[4796]: E1205 10:42:15.763151 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="611e3579-a9f1-409e-9d3a-071a436916fd" containerName="horizon-log" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763156 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="611e3579-a9f1-409e-9d3a-071a436916fd" containerName="horizon-log" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763321 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1c11605-37fa-4897-9583-2244b3de20c1" containerName="horizon-log" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763337 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9dd19b1-8fb3-439c-80e1-126c13ca90da" containerName="horizon-log" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763347 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="611e3579-a9f1-409e-9d3a-071a436916fd" containerName="horizon" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763355 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="04e95ca1-d131-4528-baaf-0be6b98a5edf" containerName="cinder-db-sync" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763363 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="611e3579-a9f1-409e-9d3a-071a436916fd" containerName="horizon-log" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763375 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="370c8f58-d257-4e3f-a54a-d34231b6dfd5" containerName="dnsmasq-dns" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763386 4796 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e9dd19b1-8fb3-439c-80e1-126c13ca90da" containerName="horizon" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763393 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0022df39-ef89-40d9-9be4-5297d4bd6dc5" containerName="barbican-api" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763401 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1c11605-37fa-4897-9583-2244b3de20c1" containerName="horizon" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763407 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0022df39-ef89-40d9-9be4-5297d4bd6dc5" containerName="barbican-api-log" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.763929 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.771001 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.771092 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-q8kb6" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.771799 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.783532 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.801340 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59k4p\" (UniqueName: \"kubernetes.io/projected/127c1064-4744-43ab-afb3-91c03cee795d-kube-api-access-59k4p\") pod \"openstackclient\" (UID: \"127c1064-4744-43ab-afb3-91c03cee795d\") " pod="openstack/openstackclient" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.801432 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127c1064-4744-43ab-afb3-91c03cee795d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"127c1064-4744-43ab-afb3-91c03cee795d\") " pod="openstack/openstackclient" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.801456 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/127c1064-4744-43ab-afb3-91c03cee795d-openstack-config-secret\") pod \"openstackclient\" (UID: \"127c1064-4744-43ab-afb3-91c03cee795d\") " pod="openstack/openstackclient" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.801510 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/127c1064-4744-43ab-afb3-91c03cee795d-openstack-config\") pod \"openstackclient\" (UID: \"127c1064-4744-43ab-afb3-91c03cee795d\") " pod="openstack/openstackclient" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.902396 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/127c1064-4744-43ab-afb3-91c03cee795d-openstack-config\") pod \"openstackclient\" (UID: \"127c1064-4744-43ab-afb3-91c03cee795d\") " pod="openstack/openstackclient" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.902457 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59k4p\" (UniqueName: \"kubernetes.io/projected/127c1064-4744-43ab-afb3-91c03cee795d-kube-api-access-59k4p\") pod \"openstackclient\" (UID: \"127c1064-4744-43ab-afb3-91c03cee795d\") " pod="openstack/openstackclient" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.902528 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127c1064-4744-43ab-afb3-91c03cee795d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"127c1064-4744-43ab-afb3-91c03cee795d\") " pod="openstack/openstackclient" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.902551 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/127c1064-4744-43ab-afb3-91c03cee795d-openstack-config-secret\") pod \"openstackclient\" (UID: \"127c1064-4744-43ab-afb3-91c03cee795d\") " pod="openstack/openstackclient" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.903943 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/127c1064-4744-43ab-afb3-91c03cee795d-openstack-config\") pod \"openstackclient\" (UID: \"127c1064-4744-43ab-afb3-91c03cee795d\") " pod="openstack/openstackclient" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.906707 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127c1064-4744-43ab-afb3-91c03cee795d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"127c1064-4744-43ab-afb3-91c03cee795d\") " pod="openstack/openstackclient" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.906938 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/127c1064-4744-43ab-afb3-91c03cee795d-openstack-config-secret\") pod \"openstackclient\" (UID: \"127c1064-4744-43ab-afb3-91c03cee795d\") " pod="openstack/openstackclient" Dec 05 10:42:15 crc kubenswrapper[4796]: I1205 10:42:15.918315 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59k4p\" (UniqueName: \"kubernetes.io/projected/127c1064-4744-43ab-afb3-91c03cee795d-kube-api-access-59k4p\") pod \"openstackclient\" (UID: 
\"127c1064-4744-43ab-afb3-91c03cee795d\") " pod="openstack/openstackclient" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.079653 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.275385 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qh5mr" event={"ID":"04e95ca1-d131-4528-baaf-0be6b98a5edf","Type":"ContainerDied","Data":"79b9cfebc3f4e1bf5afe11b73bc9b634932f2a64d06ff4ca7847a224c7599d91"} Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.275579 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79b9cfebc3f4e1bf5afe11b73bc9b634932f2a64d06ff4ca7847a224c7599d91" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.275475 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qh5mr" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.278662 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cdb46c784-6qzbh" event={"ID":"0022df39-ef89-40d9-9be4-5297d4bd6dc5","Type":"ContainerDied","Data":"b5c6a72ec5990afdcc8a05cc5ef1e99eccc71baaad77d4083cf3920f07f94a97"} Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.278716 4796 scope.go:117] "RemoveContainer" containerID="d9a68af4d90a621eff783dc744beea386f5dbed9856d09adbdab0b03ed810dcb" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.278831 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5cdb46c784-6qzbh" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.285450 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01d9c19c-9758-4d61-8a0d-53868923bfea","Type":"ContainerStarted","Data":"3964bd1b78914f6f0270dabbf0044afe0892a561a4e638576ac62bd897b61cdf"} Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.285582 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerName="ceilometer-central-agent" containerID="cri-o://186ef79ea79d9794349c468ee4c95e9aaa2df944f728755ad975f9f87a71034c" gracePeriod=30 Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.285744 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.285899 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerName="proxy-httpd" containerID="cri-o://3964bd1b78914f6f0270dabbf0044afe0892a561a4e638576ac62bd897b61cdf" gracePeriod=30 Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.286048 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerName="sg-core" containerID="cri-o://0e524c79406e51179c86d02cbd0cdbabb5530937449e821cb2f21f39994ef255" gracePeriod=30 Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.286055 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerName="ceilometer-notification-agent" containerID="cri-o://31c2a93ccbce05c86cfaf88da621386eded154a39766c613e85e54c6d48448d6" gracePeriod=30 Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.304419 4796 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.507555955 podStartE2EDuration="56.304402274s" podCreationTimestamp="2025-12-05 10:41:20 +0000 UTC" firstStartedPulling="2025-12-05 10:41:21.564289389 +0000 UTC m=+827.852394902" lastFinishedPulling="2025-12-05 10:42:15.361135708 +0000 UTC m=+881.649241221" observedRunningTime="2025-12-05 10:42:16.301214928 +0000 UTC m=+882.589320451" watchObservedRunningTime="2025-12-05 10:42:16.304402274 +0000 UTC m=+882.592507788" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.318964 4796 scope.go:117] "RemoveContainer" containerID="956720d81920982a3e64e227e1218a04fd6f867cf91263ad9cd1f8e1c197bcba" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.326757 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5cdb46c784-6qzbh"] Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.335641 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5cdb46c784-6qzbh"] Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.476194 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 05 10:42:16 crc kubenswrapper[4796]: W1205 10:42:16.485062 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod127c1064_4744_43ab_afb3_91c03cee795d.slice/crio-14448af3ce5bfe3f990e5aa41546c74cb5da3680dc5b78b456c6f91fee56797b WatchSource:0}: Error finding container 14448af3ce5bfe3f990e5aa41546c74cb5da3680dc5b78b456c6f91fee56797b: Status 404 returned error can't find the container with id 14448af3ce5bfe3f990e5aa41546c74cb5da3680dc5b78b456c6f91fee56797b Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.542063 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.544266 4796 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.550963 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.551857 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-sw6dh" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.552127 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.552602 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.568870 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.611257 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c77d8b67c-xmzls"] Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.612420 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.612496 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-scripts\") pod \"cinder-scheduler-0\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.612574 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.612604 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-config-data\") pod \"cinder-scheduler-0\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.612637 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29lk5\" (UniqueName: \"kubernetes.io/projected/51706a1f-8c53-4031-9c47-4aef53d51260-kube-api-access-29lk5\") pod \"cinder-scheduler-0\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.612655 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51706a1f-8c53-4031-9c47-4aef53d51260-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.617581 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.627227 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c77d8b67c-xmzls"] Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.724297 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51706a1f-8c53-4031-9c47-4aef53d51260-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.724369 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-config\") pod \"dnsmasq-dns-5c77d8b67c-xmzls\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.724401 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.724444 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh5v8\" (UniqueName: \"kubernetes.io/projected/95c9a914-32f8-4d28-8791-91c548000b4a-kube-api-access-wh5v8\") pod \"dnsmasq-dns-5c77d8b67c-xmzls\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.724480 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-scripts\") pod \"cinder-scheduler-0\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.724553 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-dns-svc\") pod \"dnsmasq-dns-5c77d8b67c-xmzls\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.724576 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.724597 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-ovsdbserver-nb\") pod \"dnsmasq-dns-5c77d8b67c-xmzls\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.724622 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-config-data\") pod \"cinder-scheduler-0\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.724655 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-dns-swift-storage-0\") pod \"dnsmasq-dns-5c77d8b67c-xmzls\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.724703 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29lk5\" (UniqueName: \"kubernetes.io/projected/51706a1f-8c53-4031-9c47-4aef53d51260-kube-api-access-29lk5\") pod \"cinder-scheduler-0\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.724728 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-ovsdbserver-sb\") pod \"dnsmasq-dns-5c77d8b67c-xmzls\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.724948 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51706a1f-8c53-4031-9c47-4aef53d51260-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.732574 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.733708 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.733993 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-scripts\") pod \"cinder-scheduler-0\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.734485 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.745184 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29lk5\" (UniqueName: \"kubernetes.io/projected/51706a1f-8c53-4031-9c47-4aef53d51260-kube-api-access-29lk5\") pod \"cinder-scheduler-0\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.798046 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.800000 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.804736 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.816025 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.826351 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-dns-svc\") pod \"dnsmasq-dns-5c77d8b67c-xmzls\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.826408 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-ovsdbserver-nb\") pod \"dnsmasq-dns-5c77d8b67c-xmzls\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.826458 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcc49\" (UniqueName: \"kubernetes.io/projected/9790bed7-d42e-42a3-8a68-6449ad28edb9-kube-api-access-vcc49\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.826490 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-dns-swift-storage-0\") pod \"dnsmasq-dns-5c77d8b67c-xmzls\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.826523 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-ovsdbserver-sb\") pod \"dnsmasq-dns-5c77d8b67c-xmzls\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.826544 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9790bed7-d42e-42a3-8a68-6449ad28edb9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.826580 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-config\") pod \"dnsmasq-dns-5c77d8b67c-xmzls\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.826636 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9790bed7-d42e-42a3-8a68-6449ad28edb9-logs\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.826654 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh5v8\" (UniqueName: \"kubernetes.io/projected/95c9a914-32f8-4d28-8791-91c548000b4a-kube-api-access-wh5v8\") pod \"dnsmasq-dns-5c77d8b67c-xmzls\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.826733 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-scripts\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.826775 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-config-data-custom\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.826797 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-config-data\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.826829 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.827740 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-dns-svc\") pod \"dnsmasq-dns-5c77d8b67c-xmzls\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.828403 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-config\") pod \"dnsmasq-dns-5c77d8b67c-xmzls\" (UID: 
\"95c9a914-32f8-4d28-8791-91c548000b4a\") " pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.828629 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-dns-swift-storage-0\") pod \"dnsmasq-dns-5c77d8b67c-xmzls\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.829119 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-ovsdbserver-sb\") pod \"dnsmasq-dns-5c77d8b67c-xmzls\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.830205 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-ovsdbserver-nb\") pod \"dnsmasq-dns-5c77d8b67c-xmzls\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.844359 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh5v8\" (UniqueName: \"kubernetes.io/projected/95c9a914-32f8-4d28-8791-91c548000b4a-kube-api-access-wh5v8\") pod \"dnsmasq-dns-5c77d8b67c-xmzls\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.853771 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pkrsm" podUID="02d1fa9e-96e1-44c5-89dd-e2c619890cee" containerName="registry-server" probeResult="failure" output=< Dec 05 10:42:16 crc kubenswrapper[4796]: timeout: failed to connect service ":50051" 
within 1s Dec 05 10:42:16 crc kubenswrapper[4796]: > Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.864010 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.929584 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-config-data-custom\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.929621 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-config-data\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.929656 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.929719 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcc49\" (UniqueName: \"kubernetes.io/projected/9790bed7-d42e-42a3-8a68-6449ad28edb9-kube-api-access-vcc49\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.929761 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9790bed7-d42e-42a3-8a68-6449ad28edb9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " 
pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.929854 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9790bed7-d42e-42a3-8a68-6449ad28edb9-logs\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.929895 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-scripts\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.932155 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9790bed7-d42e-42a3-8a68-6449ad28edb9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.932736 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9790bed7-d42e-42a3-8a68-6449ad28edb9-logs\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.935132 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-config-data-custom\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.935305 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-combined-ca-bundle\") pod 
\"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.939807 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-config-data\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.940168 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-scripts\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.946848 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcc49\" (UniqueName: \"kubernetes.io/projected/9790bed7-d42e-42a3-8a68-6449ad28edb9-kube-api-access-vcc49\") pod \"cinder-api-0\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " pod="openstack/cinder-api-0" Dec 05 10:42:16 crc kubenswrapper[4796]: I1205 10:42:16.966373 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:17 crc kubenswrapper[4796]: I1205 10:42:17.115305 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 10:42:17 crc kubenswrapper[4796]: I1205 10:42:17.291476 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 10:42:17 crc kubenswrapper[4796]: I1205 10:42:17.304362 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"127c1064-4744-43ab-afb3-91c03cee795d","Type":"ContainerStarted","Data":"14448af3ce5bfe3f990e5aa41546c74cb5da3680dc5b78b456c6f91fee56797b"} Dec 05 10:42:17 crc kubenswrapper[4796]: I1205 10:42:17.308143 4796 generic.go:334] "Generic (PLEG): container finished" podID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerID="3964bd1b78914f6f0270dabbf0044afe0892a561a4e638576ac62bd897b61cdf" exitCode=0 Dec 05 10:42:17 crc kubenswrapper[4796]: I1205 10:42:17.308176 4796 generic.go:334] "Generic (PLEG): container finished" podID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerID="0e524c79406e51179c86d02cbd0cdbabb5530937449e821cb2f21f39994ef255" exitCode=2 Dec 05 10:42:17 crc kubenswrapper[4796]: I1205 10:42:17.308186 4796 generic.go:334] "Generic (PLEG): container finished" podID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerID="186ef79ea79d9794349c468ee4c95e9aaa2df944f728755ad975f9f87a71034c" exitCode=0 Dec 05 10:42:17 crc kubenswrapper[4796]: I1205 10:42:17.308211 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01d9c19c-9758-4d61-8a0d-53868923bfea","Type":"ContainerDied","Data":"3964bd1b78914f6f0270dabbf0044afe0892a561a4e638576ac62bd897b61cdf"} Dec 05 10:42:17 crc kubenswrapper[4796]: I1205 10:42:17.308250 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01d9c19c-9758-4d61-8a0d-53868923bfea","Type":"ContainerDied","Data":"0e524c79406e51179c86d02cbd0cdbabb5530937449e821cb2f21f39994ef255"} Dec 05 10:42:17 crc kubenswrapper[4796]: I1205 10:42:17.308263 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"01d9c19c-9758-4d61-8a0d-53868923bfea","Type":"ContainerDied","Data":"186ef79ea79d9794349c468ee4c95e9aaa2df944f728755ad975f9f87a71034c"} Dec 05 10:42:17 crc kubenswrapper[4796]: I1205 10:42:17.461439 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c77d8b67c-xmzls"] Dec 05 10:42:17 crc kubenswrapper[4796]: W1205 10:42:17.471202 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95c9a914_32f8_4d28_8791_91c548000b4a.slice/crio-300186872dcd1042868ae9e098d5739c28505ebe392dfa8116312ad08dbc1d41 WatchSource:0}: Error finding container 300186872dcd1042868ae9e098d5739c28505ebe392dfa8116312ad08dbc1d41: Status 404 returned error can't find the container with id 300186872dcd1042868ae9e098d5739c28505ebe392dfa8116312ad08dbc1d41 Dec 05 10:42:17 crc kubenswrapper[4796]: I1205 10:42:17.588387 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 10:42:17 crc kubenswrapper[4796]: W1205 10:42:17.598305 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9790bed7_d42e_42a3_8a68_6449ad28edb9.slice/crio-4421acf3850b2127bb5925d21e7e232beee08a289eb4283aa4f5d047295ef8ff WatchSource:0}: Error finding container 4421acf3850b2127bb5925d21e7e232beee08a289eb4283aa4f5d047295ef8ff: Status 404 returned error can't find the container with id 4421acf3850b2127bb5925d21e7e232beee08a289eb4283aa4f5d047295ef8ff Dec 05 10:42:18 crc kubenswrapper[4796]: I1205 10:42:18.041635 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0022df39-ef89-40d9-9be4-5297d4bd6dc5" path="/var/lib/kubelet/pods/0022df39-ef89-40d9-9be4-5297d4bd6dc5/volumes" Dec 05 10:42:18 crc kubenswrapper[4796]: I1205 10:42:18.322460 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"9790bed7-d42e-42a3-8a68-6449ad28edb9","Type":"ContainerStarted","Data":"4421acf3850b2127bb5925d21e7e232beee08a289eb4283aa4f5d047295ef8ff"} Dec 05 10:42:18 crc kubenswrapper[4796]: I1205 10:42:18.323520 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"51706a1f-8c53-4031-9c47-4aef53d51260","Type":"ContainerStarted","Data":"9141b242b08ff6c6c8bb9fe542a15fafb779f01c472224448a17250a8304d2d6"} Dec 05 10:42:18 crc kubenswrapper[4796]: I1205 10:42:18.326219 4796 generic.go:334] "Generic (PLEG): container finished" podID="95c9a914-32f8-4d28-8791-91c548000b4a" containerID="322c81b0f1b9c0b3813005e1cafcd5bcecfeb2864ad440ce554843d43b5fb545" exitCode=0 Dec 05 10:42:18 crc kubenswrapper[4796]: I1205 10:42:18.326259 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" event={"ID":"95c9a914-32f8-4d28-8791-91c548000b4a","Type":"ContainerDied","Data":"322c81b0f1b9c0b3813005e1cafcd5bcecfeb2864ad440ce554843d43b5fb545"} Dec 05 10:42:18 crc kubenswrapper[4796]: I1205 10:42:18.326275 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" event={"ID":"95c9a914-32f8-4d28-8791-91c548000b4a","Type":"ContainerStarted","Data":"300186872dcd1042868ae9e098d5739c28505ebe392dfa8116312ad08dbc1d41"} Dec 05 10:42:18 crc kubenswrapper[4796]: I1205 10:42:18.474196 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.335654 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"51706a1f-8c53-4031-9c47-4aef53d51260","Type":"ContainerStarted","Data":"789b2130d4c65c1f154f6d4cbe9b30cbbb50c3d1045f7fa59e4ca43548cd6dc1"} Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.338695 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" 
event={"ID":"95c9a914-32f8-4d28-8791-91c548000b4a","Type":"ContainerStarted","Data":"0c2e96c4f25c59a111ca1bb6f02d86282c212ea152f0246806e8b8d1952e4910"} Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.338810 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.341276 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9790bed7-d42e-42a3-8a68-6449ad28edb9","Type":"ContainerStarted","Data":"49a56900c945f46f6d934d6f26d02ba711e0a027846d8194ff755f7fd056bb08"} Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.341303 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9790bed7-d42e-42a3-8a68-6449ad28edb9","Type":"ContainerStarted","Data":"0a95c4cce0dd05c5606750de71aa0fc467a1314604d01cf90d365e2225d5a61c"} Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.341394 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9790bed7-d42e-42a3-8a68-6449ad28edb9" containerName="cinder-api-log" containerID="cri-o://0a95c4cce0dd05c5606750de71aa0fc467a1314604d01cf90d365e2225d5a61c" gracePeriod=30 Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.341511 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9790bed7-d42e-42a3-8a68-6449ad28edb9" containerName="cinder-api" containerID="cri-o://49a56900c945f46f6d934d6f26d02ba711e0a027846d8194ff755f7fd056bb08" gracePeriod=30 Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.341751 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.360298 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" podStartSLOduration=3.360284322 
podStartE2EDuration="3.360284322s" podCreationTimestamp="2025-12-05 10:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:42:19.35593543 +0000 UTC m=+885.644040943" watchObservedRunningTime="2025-12-05 10:42:19.360284322 +0000 UTC m=+885.648389835" Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.377775 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.377759751 podStartE2EDuration="3.377759751s" podCreationTimestamp="2025-12-05 10:42:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:42:19.373983074 +0000 UTC m=+885.662088587" watchObservedRunningTime="2025-12-05 10:42:19.377759751 +0000 UTC m=+885.665865263" Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.887784 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.987497 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-config-data\") pod \"9790bed7-d42e-42a3-8a68-6449ad28edb9\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.987590 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcc49\" (UniqueName: \"kubernetes.io/projected/9790bed7-d42e-42a3-8a68-6449ad28edb9-kube-api-access-vcc49\") pod \"9790bed7-d42e-42a3-8a68-6449ad28edb9\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.987668 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-combined-ca-bundle\") pod \"9790bed7-d42e-42a3-8a68-6449ad28edb9\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.987761 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-config-data-custom\") pod \"9790bed7-d42e-42a3-8a68-6449ad28edb9\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.987839 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9790bed7-d42e-42a3-8a68-6449ad28edb9-logs\") pod \"9790bed7-d42e-42a3-8a68-6449ad28edb9\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.987862 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-scripts\") pod \"9790bed7-d42e-42a3-8a68-6449ad28edb9\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.987957 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9790bed7-d42e-42a3-8a68-6449ad28edb9-etc-machine-id\") pod \"9790bed7-d42e-42a3-8a68-6449ad28edb9\" (UID: \"9790bed7-d42e-42a3-8a68-6449ad28edb9\") " Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.988469 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9790bed7-d42e-42a3-8a68-6449ad28edb9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9790bed7-d42e-42a3-8a68-6449ad28edb9" (UID: "9790bed7-d42e-42a3-8a68-6449ad28edb9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.988590 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9790bed7-d42e-42a3-8a68-6449ad28edb9-logs" (OuterVolumeSpecName: "logs") pod "9790bed7-d42e-42a3-8a68-6449ad28edb9" (UID: "9790bed7-d42e-42a3-8a68-6449ad28edb9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.997606 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-scripts" (OuterVolumeSpecName: "scripts") pod "9790bed7-d42e-42a3-8a68-6449ad28edb9" (UID: "9790bed7-d42e-42a3-8a68-6449ad28edb9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.998425 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9790bed7-d42e-42a3-8a68-6449ad28edb9" (UID: "9790bed7-d42e-42a3-8a68-6449ad28edb9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:19 crc kubenswrapper[4796]: I1205 10:42:19.999052 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9790bed7-d42e-42a3-8a68-6449ad28edb9-kube-api-access-vcc49" (OuterVolumeSpecName: "kube-api-access-vcc49") pod "9790bed7-d42e-42a3-8a68-6449ad28edb9" (UID: "9790bed7-d42e-42a3-8a68-6449ad28edb9"). InnerVolumeSpecName "kube-api-access-vcc49". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.043755 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9790bed7-d42e-42a3-8a68-6449ad28edb9" (UID: "9790bed7-d42e-42a3-8a68-6449ad28edb9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.043915 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-config-data" (OuterVolumeSpecName: "config-data") pod "9790bed7-d42e-42a3-8a68-6449ad28edb9" (UID: "9790bed7-d42e-42a3-8a68-6449ad28edb9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.091063 4796 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9790bed7-d42e-42a3-8a68-6449ad28edb9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.091092 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.091102 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcc49\" (UniqueName: \"kubernetes.io/projected/9790bed7-d42e-42a3-8a68-6449ad28edb9-kube-api-access-vcc49\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.091113 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.091122 4796 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.091131 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9790bed7-d42e-42a3-8a68-6449ad28edb9-logs\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.091262 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9790bed7-d42e-42a3-8a68-6449ad28edb9-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.352005 4796 generic.go:334] "Generic 
(PLEG): container finished" podID="9790bed7-d42e-42a3-8a68-6449ad28edb9" containerID="49a56900c945f46f6d934d6f26d02ba711e0a027846d8194ff755f7fd056bb08" exitCode=0 Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.352039 4796 generic.go:334] "Generic (PLEG): container finished" podID="9790bed7-d42e-42a3-8a68-6449ad28edb9" containerID="0a95c4cce0dd05c5606750de71aa0fc467a1314604d01cf90d365e2225d5a61c" exitCode=143 Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.352092 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9790bed7-d42e-42a3-8a68-6449ad28edb9","Type":"ContainerDied","Data":"49a56900c945f46f6d934d6f26d02ba711e0a027846d8194ff755f7fd056bb08"} Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.352121 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9790bed7-d42e-42a3-8a68-6449ad28edb9","Type":"ContainerDied","Data":"0a95c4cce0dd05c5606750de71aa0fc467a1314604d01cf90d365e2225d5a61c"} Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.352131 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9790bed7-d42e-42a3-8a68-6449ad28edb9","Type":"ContainerDied","Data":"4421acf3850b2127bb5925d21e7e232beee08a289eb4283aa4f5d047295ef8ff"} Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.352145 4796 scope.go:117] "RemoveContainer" containerID="49a56900c945f46f6d934d6f26d02ba711e0a027846d8194ff755f7fd056bb08" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.352174 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.358638 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"51706a1f-8c53-4031-9c47-4aef53d51260","Type":"ContainerStarted","Data":"362df8e263c28d69ab9117414f61b9c6f76f91f5d7fe4d7ff972db6ffd3ee940"} Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.378049 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.291439036 podStartE2EDuration="4.378029044s" podCreationTimestamp="2025-12-05 10:42:16 +0000 UTC" firstStartedPulling="2025-12-05 10:42:17.314399841 +0000 UTC m=+883.602505355" lastFinishedPulling="2025-12-05 10:42:18.40098985 +0000 UTC m=+884.689095363" observedRunningTime="2025-12-05 10:42:20.376391362 +0000 UTC m=+886.664496876" watchObservedRunningTime="2025-12-05 10:42:20.378029044 +0000 UTC m=+886.666134557" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.400386 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.404778 4796 scope.go:117] "RemoveContainer" containerID="0a95c4cce0dd05c5606750de71aa0fc467a1314604d01cf90d365e2225d5a61c" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.413418 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.420721 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 05 10:42:20 crc kubenswrapper[4796]: E1205 10:42:20.421142 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9790bed7-d42e-42a3-8a68-6449ad28edb9" containerName="cinder-api" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.421160 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9790bed7-d42e-42a3-8a68-6449ad28edb9" containerName="cinder-api" Dec 05 10:42:20 crc 
kubenswrapper[4796]: E1205 10:42:20.421178 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9790bed7-d42e-42a3-8a68-6449ad28edb9" containerName="cinder-api-log" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.421184 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9790bed7-d42e-42a3-8a68-6449ad28edb9" containerName="cinder-api-log" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.421410 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9790bed7-d42e-42a3-8a68-6449ad28edb9" containerName="cinder-api-log" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.421428 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9790bed7-d42e-42a3-8a68-6449ad28edb9" containerName="cinder-api" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.422395 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.424530 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.424735 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.424776 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.428973 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.463542 4796 scope.go:117] "RemoveContainer" containerID="49a56900c945f46f6d934d6f26d02ba711e0a027846d8194ff755f7fd056bb08" Dec 05 10:42:20 crc kubenswrapper[4796]: E1205 10:42:20.464206 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"49a56900c945f46f6d934d6f26d02ba711e0a027846d8194ff755f7fd056bb08\": container with ID starting with 49a56900c945f46f6d934d6f26d02ba711e0a027846d8194ff755f7fd056bb08 not found: ID does not exist" containerID="49a56900c945f46f6d934d6f26d02ba711e0a027846d8194ff755f7fd056bb08" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.464238 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49a56900c945f46f6d934d6f26d02ba711e0a027846d8194ff755f7fd056bb08"} err="failed to get container status \"49a56900c945f46f6d934d6f26d02ba711e0a027846d8194ff755f7fd056bb08\": rpc error: code = NotFound desc = could not find container \"49a56900c945f46f6d934d6f26d02ba711e0a027846d8194ff755f7fd056bb08\": container with ID starting with 49a56900c945f46f6d934d6f26d02ba711e0a027846d8194ff755f7fd056bb08 not found: ID does not exist" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.464294 4796 scope.go:117] "RemoveContainer" containerID="0a95c4cce0dd05c5606750de71aa0fc467a1314604d01cf90d365e2225d5a61c" Dec 05 10:42:20 crc kubenswrapper[4796]: E1205 10:42:20.464628 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a95c4cce0dd05c5606750de71aa0fc467a1314604d01cf90d365e2225d5a61c\": container with ID starting with 0a95c4cce0dd05c5606750de71aa0fc467a1314604d01cf90d365e2225d5a61c not found: ID does not exist" containerID="0a95c4cce0dd05c5606750de71aa0fc467a1314604d01cf90d365e2225d5a61c" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.464649 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a95c4cce0dd05c5606750de71aa0fc467a1314604d01cf90d365e2225d5a61c"} err="failed to get container status \"0a95c4cce0dd05c5606750de71aa0fc467a1314604d01cf90d365e2225d5a61c\": rpc error: code = NotFound desc = could not find container \"0a95c4cce0dd05c5606750de71aa0fc467a1314604d01cf90d365e2225d5a61c\": container with ID 
starting with 0a95c4cce0dd05c5606750de71aa0fc467a1314604d01cf90d365e2225d5a61c not found: ID does not exist" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.464661 4796 scope.go:117] "RemoveContainer" containerID="49a56900c945f46f6d934d6f26d02ba711e0a027846d8194ff755f7fd056bb08" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.464895 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49a56900c945f46f6d934d6f26d02ba711e0a027846d8194ff755f7fd056bb08"} err="failed to get container status \"49a56900c945f46f6d934d6f26d02ba711e0a027846d8194ff755f7fd056bb08\": rpc error: code = NotFound desc = could not find container \"49a56900c945f46f6d934d6f26d02ba711e0a027846d8194ff755f7fd056bb08\": container with ID starting with 49a56900c945f46f6d934d6f26d02ba711e0a027846d8194ff755f7fd056bb08 not found: ID does not exist" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.464910 4796 scope.go:117] "RemoveContainer" containerID="0a95c4cce0dd05c5606750de71aa0fc467a1314604d01cf90d365e2225d5a61c" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.465166 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a95c4cce0dd05c5606750de71aa0fc467a1314604d01cf90d365e2225d5a61c"} err="failed to get container status \"0a95c4cce0dd05c5606750de71aa0fc467a1314604d01cf90d365e2225d5a61c\": rpc error: code = NotFound desc = could not find container \"0a95c4cce0dd05c5606750de71aa0fc467a1314604d01cf90d365e2225d5a61c\": container with ID starting with 0a95c4cce0dd05c5606750de71aa0fc467a1314604d01cf90d365e2225d5a61c not found: ID does not exist" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.500927 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ffca929-12a6-40b2-96ee-ff84ea1818dc-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.500959 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffca929-12a6-40b2-96ee-ff84ea1818dc-config-data\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.500995 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ffca929-12a6-40b2-96ee-ff84ea1818dc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.501068 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ffca929-12a6-40b2-96ee-ff84ea1818dc-logs\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.501121 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ffca929-12a6-40b2-96ee-ff84ea1818dc-scripts\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.501147 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ffca929-12a6-40b2-96ee-ff84ea1818dc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.501168 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ffca929-12a6-40b2-96ee-ff84ea1818dc-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.501230 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n28n\" (UniqueName: \"kubernetes.io/projected/8ffca929-12a6-40b2-96ee-ff84ea1818dc-kube-api-access-9n28n\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.501259 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ffca929-12a6-40b2-96ee-ff84ea1818dc-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.553380 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69fdddc9b6-2ckhp" podUID="fdca92fe-39ad-41e9-978b-1757290eee03" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.140:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.140:8443: connect: connection refused" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.601952 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n28n\" (UniqueName: \"kubernetes.io/projected/8ffca929-12a6-40b2-96ee-ff84ea1818dc-kube-api-access-9n28n\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.601992 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ffca929-12a6-40b2-96ee-ff84ea1818dc-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.602041 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ffca929-12a6-40b2-96ee-ff84ea1818dc-config-data-custom\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.602060 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffca929-12a6-40b2-96ee-ff84ea1818dc-config-data\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.602093 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ffca929-12a6-40b2-96ee-ff84ea1818dc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.602135 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ffca929-12a6-40b2-96ee-ff84ea1818dc-logs\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.602165 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ffca929-12a6-40b2-96ee-ff84ea1818dc-scripts\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: 
I1205 10:42:20.602186 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ffca929-12a6-40b2-96ee-ff84ea1818dc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.602203 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ffca929-12a6-40b2-96ee-ff84ea1818dc-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.602890 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ffca929-12a6-40b2-96ee-ff84ea1818dc-logs\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.602953 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ffca929-12a6-40b2-96ee-ff84ea1818dc-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.607629 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ffca929-12a6-40b2-96ee-ff84ea1818dc-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.607894 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ffca929-12a6-40b2-96ee-ff84ea1818dc-internal-tls-certs\") pod \"cinder-api-0\" (UID: 
\"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.609020 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ffca929-12a6-40b2-96ee-ff84ea1818dc-config-data-custom\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.614019 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ffca929-12a6-40b2-96ee-ff84ea1818dc-scripts\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.614041 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ffca929-12a6-40b2-96ee-ff84ea1818dc-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.614501 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffca929-12a6-40b2-96ee-ff84ea1818dc-config-data\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.621036 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n28n\" (UniqueName: \"kubernetes.io/projected/8ffca929-12a6-40b2-96ee-ff84ea1818dc-kube-api-access-9n28n\") pod \"cinder-api-0\" (UID: \"8ffca929-12a6-40b2-96ee-ff84ea1818dc\") " pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.746704 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.774201 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-799657985-knzrm"] Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.778561 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.782916 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.783056 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.783170 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.787063 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-799657985-knzrm"] Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.805356 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-combined-ca-bundle\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.805407 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-internal-tls-certs\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.805452 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp7n9\" (UniqueName: \"kubernetes.io/projected/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-kube-api-access-fp7n9\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.805473 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-etc-swift\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.805507 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-config-data\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.805557 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-run-httpd\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.805579 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-log-httpd\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.805612 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-public-tls-certs\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.892094 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.892307 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3481f623-3439-4c17-ab95-7bf31e8fa3a0" containerName="glance-log" containerID="cri-o://efd6c98a037dee2fc9d53df24d6f2199eb9527a02cf9f7d0cb221a4a61e36963" gracePeriod=30 Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.892505 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3481f623-3439-4c17-ab95-7bf31e8fa3a0" containerName="glance-httpd" containerID="cri-o://53ead197645d5a519dff53b274a05f3c7fae717047ce888937387d43d99fdf00" gracePeriod=30 Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.906877 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-internal-tls-certs\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.906923 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp7n9\" (UniqueName: \"kubernetes.io/projected/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-kube-api-access-fp7n9\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " 
pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.906945 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-etc-swift\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.906982 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-config-data\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.907045 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-run-httpd\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.907066 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-log-httpd\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.907102 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-public-tls-certs\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 
10:42:20.907141 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-combined-ca-bundle\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.907649 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-run-httpd\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.907811 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-log-httpd\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.913330 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-internal-tls-certs\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.913457 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-combined-ca-bundle\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.914349 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-etc-swift\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.917175 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-config-data\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.918673 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-public-tls-certs\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:20 crc kubenswrapper[4796]: I1205 10:42:20.926273 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp7n9\" (UniqueName: \"kubernetes.io/projected/9e4aeaf3-d2d1-43ab-8594-d293d8602be5-kube-api-access-fp7n9\") pod \"swift-proxy-799657985-knzrm\" (UID: \"9e4aeaf3-d2d1-43ab-8594-d293d8602be5\") " pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.155535 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.245317 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.382198 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8ffca929-12a6-40b2-96ee-ff84ea1818dc","Type":"ContainerStarted","Data":"f6ce1d31322a40b9ddb79f3e7d4bcd65765884f15b27875345560002338831ca"} Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.386382 4796 generic.go:334] "Generic (PLEG): container finished" podID="3481f623-3439-4c17-ab95-7bf31e8fa3a0" containerID="efd6c98a037dee2fc9d53df24d6f2199eb9527a02cf9f7d0cb221a4a61e36963" exitCode=143 Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.387364 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3481f623-3439-4c17-ab95-7bf31e8fa3a0","Type":"ContainerDied","Data":"efd6c98a037dee2fc9d53df24d6f2199eb9527a02cf9f7d0cb221a4a61e36963"} Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.690157 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-799657985-knzrm"] Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.710619 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.824975 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-scripts\") pod \"01d9c19c-9758-4d61-8a0d-53868923bfea\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.825179 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01d9c19c-9758-4d61-8a0d-53868923bfea-log-httpd\") pod \"01d9c19c-9758-4d61-8a0d-53868923bfea\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.825268 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-config-data\") pod \"01d9c19c-9758-4d61-8a0d-53868923bfea\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.825341 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-sg-core-conf-yaml\") pod \"01d9c19c-9758-4d61-8a0d-53868923bfea\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.825384 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01d9c19c-9758-4d61-8a0d-53868923bfea-run-httpd\") pod \"01d9c19c-9758-4d61-8a0d-53868923bfea\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.825445 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-combined-ca-bundle\") pod \"01d9c19c-9758-4d61-8a0d-53868923bfea\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.825481 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmd85\" (UniqueName: \"kubernetes.io/projected/01d9c19c-9758-4d61-8a0d-53868923bfea-kube-api-access-cmd85\") pod \"01d9c19c-9758-4d61-8a0d-53868923bfea\" (UID: \"01d9c19c-9758-4d61-8a0d-53868923bfea\") " Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.825843 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01d9c19c-9758-4d61-8a0d-53868923bfea-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "01d9c19c-9758-4d61-8a0d-53868923bfea" (UID: "01d9c19c-9758-4d61-8a0d-53868923bfea"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.825889 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01d9c19c-9758-4d61-8a0d-53868923bfea-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "01d9c19c-9758-4d61-8a0d-53868923bfea" (UID: "01d9c19c-9758-4d61-8a0d-53868923bfea"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.826045 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01d9c19c-9758-4d61-8a0d-53868923bfea-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.829275 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01d9c19c-9758-4d61-8a0d-53868923bfea-kube-api-access-cmd85" (OuterVolumeSpecName: "kube-api-access-cmd85") pod "01d9c19c-9758-4d61-8a0d-53868923bfea" (UID: "01d9c19c-9758-4d61-8a0d-53868923bfea"). InnerVolumeSpecName "kube-api-access-cmd85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.833502 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-scripts" (OuterVolumeSpecName: "scripts") pod "01d9c19c-9758-4d61-8a0d-53868923bfea" (UID: "01d9c19c-9758-4d61-8a0d-53868923bfea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.867811 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.877037 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "01d9c19c-9758-4d61-8a0d-53868923bfea" (UID: "01d9c19c-9758-4d61-8a0d-53868923bfea"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.900209 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01d9c19c-9758-4d61-8a0d-53868923bfea" (UID: "01d9c19c-9758-4d61-8a0d-53868923bfea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.915955 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-config-data" (OuterVolumeSpecName: "config-data") pod "01d9c19c-9758-4d61-8a0d-53868923bfea" (UID: "01d9c19c-9758-4d61-8a0d-53868923bfea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.928472 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01d9c19c-9758-4d61-8a0d-53868923bfea-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.928497 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.928510 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.928520 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 
10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.928530 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmd85\" (UniqueName: \"kubernetes.io/projected/01d9c19c-9758-4d61-8a0d-53868923bfea-kube-api-access-cmd85\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:21 crc kubenswrapper[4796]: I1205 10:42:21.928541 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01d9c19c-9758-4d61-8a0d-53868923bfea-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.047259 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9790bed7-d42e-42a3-8a68-6449ad28edb9" path="/var/lib/kubelet/pods/9790bed7-d42e-42a3-8a68-6449ad28edb9/volumes" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.341959 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.342301 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0" containerName="glance-log" containerID="cri-o://f4339d37be037c60dc834a791ce9bf16e462f20b7fea0a6894e832156318fb00" gracePeriod=30 Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.342429 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0" containerName="glance-httpd" containerID="cri-o://db5df645b78e6a38648826fbac5189c0614e92d15fc2d3518c6a53652a9a10ac" gracePeriod=30 Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.405582 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8ffca929-12a6-40b2-96ee-ff84ea1818dc","Type":"ContainerStarted","Data":"cb49f06fe3e851b44a79f9894422b5c6116a33374ee10eb78291daaf709ccb96"} Dec 05 10:42:22 crc 
kubenswrapper[4796]: I1205 10:42:22.407807 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-799657985-knzrm" event={"ID":"9e4aeaf3-d2d1-43ab-8594-d293d8602be5","Type":"ContainerStarted","Data":"a6f1f7c6be5c97d1755609f898b8ed5fcca8eafb12e2ea3901984354e819d6fe"} Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.414886 4796 generic.go:334] "Generic (PLEG): container finished" podID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerID="31c2a93ccbce05c86cfaf88da621386eded154a39766c613e85e54c6d48448d6" exitCode=0 Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.415040 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.415071 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01d9c19c-9758-4d61-8a0d-53868923bfea","Type":"ContainerDied","Data":"31c2a93ccbce05c86cfaf88da621386eded154a39766c613e85e54c6d48448d6"} Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.415117 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01d9c19c-9758-4d61-8a0d-53868923bfea","Type":"ContainerDied","Data":"c0239fc9ce0f7551603c601eca797ef9632f686290f0a2bd991399c7faa1d725"} Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.415137 4796 scope.go:117] "RemoveContainer" containerID="3964bd1b78914f6f0270dabbf0044afe0892a561a4e638576ac62bd897b61cdf" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.458006 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.464776 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.478599 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:42:22 crc kubenswrapper[4796]: E1205 10:42:22.479048 
4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerName="proxy-httpd" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.479785 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerName="proxy-httpd" Dec 05 10:42:22 crc kubenswrapper[4796]: E1205 10:42:22.479813 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerName="ceilometer-central-agent" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.479822 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerName="ceilometer-central-agent" Dec 05 10:42:22 crc kubenswrapper[4796]: E1205 10:42:22.479831 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerName="sg-core" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.479837 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerName="sg-core" Dec 05 10:42:22 crc kubenswrapper[4796]: E1205 10:42:22.479880 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerName="ceilometer-notification-agent" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.479887 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerName="ceilometer-notification-agent" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.480082 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerName="ceilometer-central-agent" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.480101 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerName="ceilometer-notification-agent" Dec 05 10:42:22 crc 
kubenswrapper[4796]: I1205 10:42:22.480117 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerName="sg-core" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.480132 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="01d9c19c-9758-4d61-8a0d-53868923bfea" containerName="proxy-httpd" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.482092 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.486773 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.486962 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.488069 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.644833 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-scripts\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.644892 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-log-httpd\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.645192 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhcwq\" (UniqueName: 
\"kubernetes.io/projected/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-kube-api-access-zhcwq\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.645241 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.645303 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-run-httpd\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.645438 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-config-data\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.645508 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.747983 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhcwq\" (UniqueName: \"kubernetes.io/projected/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-kube-api-access-zhcwq\") pod \"ceilometer-0\" (UID: 
\"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.748031 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.748057 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-run-httpd\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.748106 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-config-data\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.748141 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.748193 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-scripts\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.748217 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-log-httpd\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.748735 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-log-httpd\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.748993 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-run-httpd\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.755920 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.756212 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-scripts\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.756279 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-config-data\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.762380 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zhcwq\" (UniqueName: \"kubernetes.io/projected/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-kube-api-access-zhcwq\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.770313 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " pod="openstack/ceilometer-0" Dec 05 10:42:22 crc kubenswrapper[4796]: I1205 10:42:22.803674 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.150408 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-ggsqf"] Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.152065 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ggsqf" Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.163496 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ggsqf"] Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.254335 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-pzwzv"] Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.255670 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-pzwzv" Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.259279 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxqph\" (UniqueName: \"kubernetes.io/projected/b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25-kube-api-access-mxqph\") pod \"nova-api-db-create-ggsqf\" (UID: \"b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25\") " pod="openstack/nova-api-db-create-ggsqf" Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.278443 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pzwzv"] Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.343753 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.359123 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-7pd69"] Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.360280 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7pd69" Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.361595 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2474b\" (UniqueName: \"kubernetes.io/projected/15729ebc-4805-4ccd-a70b-ed95183246be-kube-api-access-2474b\") pod \"nova-cell1-db-create-7pd69\" (UID: \"15729ebc-4805-4ccd-a70b-ed95183246be\") " pod="openstack/nova-cell1-db-create-7pd69" Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.361655 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxqph\" (UniqueName: \"kubernetes.io/projected/b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25-kube-api-access-mxqph\") pod \"nova-api-db-create-ggsqf\" (UID: \"b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25\") " pod="openstack/nova-api-db-create-ggsqf" Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.361755 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptd9x\" (UniqueName: \"kubernetes.io/projected/1a11a6bb-3600-4222-b30e-d78931484d32-kube-api-access-ptd9x\") pod \"nova-cell0-db-create-pzwzv\" (UID: \"1a11a6bb-3600-4222-b30e-d78931484d32\") " pod="openstack/nova-cell0-db-create-pzwzv" Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.365892 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7pd69"] Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.377863 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxqph\" (UniqueName: \"kubernetes.io/projected/b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25-kube-api-access-mxqph\") pod \"nova-api-db-create-ggsqf\" (UID: \"b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25\") " pod="openstack/nova-api-db-create-ggsqf" Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.433914 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0" containerID="f4339d37be037c60dc834a791ce9bf16e462f20b7fea0a6894e832156318fb00" exitCode=143 Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.433956 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0","Type":"ContainerDied","Data":"f4339d37be037c60dc834a791ce9bf16e462f20b7fea0a6894e832156318fb00"} Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.463961 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptd9x\" (UniqueName: \"kubernetes.io/projected/1a11a6bb-3600-4222-b30e-d78931484d32-kube-api-access-ptd9x\") pod \"nova-cell0-db-create-pzwzv\" (UID: \"1a11a6bb-3600-4222-b30e-d78931484d32\") " pod="openstack/nova-cell0-db-create-pzwzv" Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.464105 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2474b\" (UniqueName: \"kubernetes.io/projected/15729ebc-4805-4ccd-a70b-ed95183246be-kube-api-access-2474b\") pod \"nova-cell1-db-create-7pd69\" (UID: \"15729ebc-4805-4ccd-a70b-ed95183246be\") " pod="openstack/nova-cell1-db-create-7pd69" Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.470433 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-ggsqf" Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.478561 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptd9x\" (UniqueName: \"kubernetes.io/projected/1a11a6bb-3600-4222-b30e-d78931484d32-kube-api-access-ptd9x\") pod \"nova-cell0-db-create-pzwzv\" (UID: \"1a11a6bb-3600-4222-b30e-d78931484d32\") " pod="openstack/nova-cell0-db-create-pzwzv" Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.478823 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2474b\" (UniqueName: \"kubernetes.io/projected/15729ebc-4805-4ccd-a70b-ed95183246be-kube-api-access-2474b\") pod \"nova-cell1-db-create-7pd69\" (UID: \"15729ebc-4805-4ccd-a70b-ed95183246be\") " pod="openstack/nova-cell1-db-create-7pd69" Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.583593 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pzwzv" Dec 05 10:42:23 crc kubenswrapper[4796]: I1205 10:42:23.709108 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7pd69" Dec 05 10:42:24 crc kubenswrapper[4796]: I1205 10:42:24.043542 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01d9c19c-9758-4d61-8a0d-53868923bfea" path="/var/lib/kubelet/pods/01d9c19c-9758-4d61-8a0d-53868923bfea/volumes" Dec 05 10:42:24 crc kubenswrapper[4796]: I1205 10:42:24.445490 4796 generic.go:334] "Generic (PLEG): container finished" podID="3481f623-3439-4c17-ab95-7bf31e8fa3a0" containerID="53ead197645d5a519dff53b274a05f3c7fae717047ce888937387d43d99fdf00" exitCode=0 Dec 05 10:42:24 crc kubenswrapper[4796]: I1205 10:42:24.445658 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3481f623-3439-4c17-ab95-7bf31e8fa3a0","Type":"ContainerDied","Data":"53ead197645d5a519dff53b274a05f3c7fae717047ce888937387d43d99fdf00"} Dec 05 10:42:25 crc kubenswrapper[4796]: I1205 10:42:25.863523 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pkrsm" Dec 05 10:42:25 crc kubenswrapper[4796]: I1205 10:42:25.910030 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pkrsm" Dec 05 10:42:26 crc kubenswrapper[4796]: I1205 10:42:26.097408 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pkrsm"] Dec 05 10:42:26 crc kubenswrapper[4796]: I1205 10:42:26.464638 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0","Type":"ContainerDied","Data":"db5df645b78e6a38648826fbac5189c0614e92d15fc2d3518c6a53652a9a10ac"} Dec 05 10:42:26 crc kubenswrapper[4796]: I1205 10:42:26.464586 4796 generic.go:334] "Generic (PLEG): container finished" podID="cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0" containerID="db5df645b78e6a38648826fbac5189c0614e92d15fc2d3518c6a53652a9a10ac" 
exitCode=0 Dec 05 10:42:26 crc kubenswrapper[4796]: I1205 10:42:26.562069 4796 scope.go:117] "RemoveContainer" containerID="0e524c79406e51179c86d02cbd0cdbabb5530937449e821cb2f21f39994ef255" Dec 05 10:42:26 crc kubenswrapper[4796]: I1205 10:42:26.841418 4796 scope.go:117] "RemoveContainer" containerID="31c2a93ccbce05c86cfaf88da621386eded154a39766c613e85e54c6d48448d6" Dec 05 10:42:26 crc kubenswrapper[4796]: I1205 10:42:26.904236 4796 scope.go:117] "RemoveContainer" containerID="186ef79ea79d9794349c468ee4c95e9aaa2df944f728755ad975f9f87a71034c" Dec 05 10:42:26 crc kubenswrapper[4796]: I1205 10:42:26.971829 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.000801 4796 scope.go:117] "RemoveContainer" containerID="3964bd1b78914f6f0270dabbf0044afe0892a561a4e638576ac62bd897b61cdf" Dec 05 10:42:27 crc kubenswrapper[4796]: E1205 10:42:27.013103 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3964bd1b78914f6f0270dabbf0044afe0892a561a4e638576ac62bd897b61cdf\": container with ID starting with 3964bd1b78914f6f0270dabbf0044afe0892a561a4e638576ac62bd897b61cdf not found: ID does not exist" containerID="3964bd1b78914f6f0270dabbf0044afe0892a561a4e638576ac62bd897b61cdf" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.013429 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3964bd1b78914f6f0270dabbf0044afe0892a561a4e638576ac62bd897b61cdf"} err="failed to get container status \"3964bd1b78914f6f0270dabbf0044afe0892a561a4e638576ac62bd897b61cdf\": rpc error: code = NotFound desc = could not find container \"3964bd1b78914f6f0270dabbf0044afe0892a561a4e638576ac62bd897b61cdf\": container with ID starting with 3964bd1b78914f6f0270dabbf0044afe0892a561a4e638576ac62bd897b61cdf not found: ID does not exist" Dec 05 10:42:27 crc 
kubenswrapper[4796]: I1205 10:42:27.013505 4796 scope.go:117] "RemoveContainer" containerID="0e524c79406e51179c86d02cbd0cdbabb5530937449e821cb2f21f39994ef255" Dec 05 10:42:27 crc kubenswrapper[4796]: E1205 10:42:27.017817 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e524c79406e51179c86d02cbd0cdbabb5530937449e821cb2f21f39994ef255\": container with ID starting with 0e524c79406e51179c86d02cbd0cdbabb5530937449e821cb2f21f39994ef255 not found: ID does not exist" containerID="0e524c79406e51179c86d02cbd0cdbabb5530937449e821cb2f21f39994ef255" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.017914 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e524c79406e51179c86d02cbd0cdbabb5530937449e821cb2f21f39994ef255"} err="failed to get container status \"0e524c79406e51179c86d02cbd0cdbabb5530937449e821cb2f21f39994ef255\": rpc error: code = NotFound desc = could not find container \"0e524c79406e51179c86d02cbd0cdbabb5530937449e821cb2f21f39994ef255\": container with ID starting with 0e524c79406e51179c86d02cbd0cdbabb5530937449e821cb2f21f39994ef255 not found: ID does not exist" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.018034 4796 scope.go:117] "RemoveContainer" containerID="31c2a93ccbce05c86cfaf88da621386eded154a39766c613e85e54c6d48448d6" Dec 05 10:42:27 crc kubenswrapper[4796]: E1205 10:42:27.022293 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31c2a93ccbce05c86cfaf88da621386eded154a39766c613e85e54c6d48448d6\": container with ID starting with 31c2a93ccbce05c86cfaf88da621386eded154a39766c613e85e54c6d48448d6 not found: ID does not exist" containerID="31c2a93ccbce05c86cfaf88da621386eded154a39766c613e85e54c6d48448d6" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.022329 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"31c2a93ccbce05c86cfaf88da621386eded154a39766c613e85e54c6d48448d6"} err="failed to get container status \"31c2a93ccbce05c86cfaf88da621386eded154a39766c613e85e54c6d48448d6\": rpc error: code = NotFound desc = could not find container \"31c2a93ccbce05c86cfaf88da621386eded154a39766c613e85e54c6d48448d6\": container with ID starting with 31c2a93ccbce05c86cfaf88da621386eded154a39766c613e85e54c6d48448d6 not found: ID does not exist" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.022353 4796 scope.go:117] "RemoveContainer" containerID="186ef79ea79d9794349c468ee4c95e9aaa2df944f728755ad975f9f87a71034c" Dec 05 10:42:27 crc kubenswrapper[4796]: E1205 10:42:27.023504 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"186ef79ea79d9794349c468ee4c95e9aaa2df944f728755ad975f9f87a71034c\": container with ID starting with 186ef79ea79d9794349c468ee4c95e9aaa2df944f728755ad975f9f87a71034c not found: ID does not exist" containerID="186ef79ea79d9794349c468ee4c95e9aaa2df944f728755ad975f9f87a71034c" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.023578 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"186ef79ea79d9794349c468ee4c95e9aaa2df944f728755ad975f9f87a71034c"} err="failed to get container status \"186ef79ea79d9794349c468ee4c95e9aaa2df944f728755ad975f9f87a71034c\": rpc error: code = NotFound desc = could not find container \"186ef79ea79d9794349c468ee4c95e9aaa2df944f728755ad975f9f87a71034c\": container with ID starting with 186ef79ea79d9794349c468ee4c95e9aaa2df944f728755ad975f9f87a71034c not found: ID does not exist" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.037578 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65dd957765-k8dtb"] Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.038006 4796 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/dnsmasq-dns-65dd957765-k8dtb" podUID="48ef35ce-c0f4-47d7-b025-786c933f29f9" containerName="dnsmasq-dns" containerID="cri-o://827fc1b8dbc4d9127c757c3c2c09072a889a65b967d2563ef4c713cc89262a7c" gracePeriod=10 Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.101655 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.118884 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.136929 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.251922 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-scripts\") pod \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.252117 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-combined-ca-bundle\") pod \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.252291 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.252398 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-httpd-run\") pod \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.252418 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-config-data\") pod \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.252527 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-internal-tls-certs\") pod \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.252553 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-logs\") pod \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.252591 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrwjq\" (UniqueName: \"kubernetes.io/projected/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-kube-api-access-mrwjq\") pod \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\" (UID: \"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0\") " Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.254463 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0" (UID: "cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.256300 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-logs" (OuterVolumeSpecName: "logs") pod "cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0" (UID: "cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.259365 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-scripts" (OuterVolumeSpecName: "scripts") pod "cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0" (UID: "cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.262958 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-kube-api-access-mrwjq" (OuterVolumeSpecName: "kube-api-access-mrwjq") pod "cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0" (UID: "cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0"). InnerVolumeSpecName "kube-api-access-mrwjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.280843 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0" (UID: "cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.292544 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0" (UID: "cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.296495 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-ggsqf"] Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.319658 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0" (UID: "cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.325192 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-config-data" (OuterVolumeSpecName: "config-data") pod "cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0" (UID: "cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.355104 4796 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.355394 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-logs\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.355405 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrwjq\" (UniqueName: \"kubernetes.io/projected/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-kube-api-access-mrwjq\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.355415 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.355423 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.355452 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.355461 4796 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.355472 4796 reconciler_common.go:293] "Volume 
detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.377094 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.466513 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.483541 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pzwzv"] Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.488989 4796 generic.go:334] "Generic (PLEG): container finished" podID="48ef35ce-c0f4-47d7-b025-786c933f29f9" containerID="827fc1b8dbc4d9127c757c3c2c09072a889a65b967d2563ef4c713cc89262a7c" exitCode=0 Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.489040 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65dd957765-k8dtb" event={"ID":"48ef35ce-c0f4-47d7-b025-786c933f29f9","Type":"ContainerDied","Data":"827fc1b8dbc4d9127c757c3c2c09072a889a65b967d2563ef4c713cc89262a7c"} Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.496780 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.496827 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0","Type":"ContainerDied","Data":"34d67ffb28f99a71a90f3c7a975e1f0870e9f1f0a35f292eb161abbf8b782cfa"} Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.497101 4796 scope.go:117] "RemoveContainer" containerID="db5df645b78e6a38648826fbac5189c0614e92d15fc2d3518c6a53652a9a10ac" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.504963 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7pd69"] Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.520871 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-799657985-knzrm" event={"ID":"9e4aeaf3-d2d1-43ab-8594-d293d8602be5","Type":"ContainerStarted","Data":"759f644177ac3efe8418002d46b79986e350ba374ca905cb6ca63d9d8674f00f"} Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.520909 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-799657985-knzrm" event={"ID":"9e4aeaf3-d2d1-43ab-8594-d293d8602be5","Type":"ContainerStarted","Data":"4d74e1ad7e945acfb3d64892b2db7ee99d33ca2c91af0f2e187e9648da80569a"} Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.520946 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.520965 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.532007 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.537925 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-db-create-ggsqf" event={"ID":"b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25","Type":"ContainerStarted","Data":"56caedf7264555a00e43801d6fe4f14e337dd912995abe5c0b847b59f6955d5d"} Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.544979 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"127c1064-4744-43ab-afb3-91c03cee795d","Type":"ContainerStarted","Data":"267d49a8cb6aeef0c4f48099c85ee065e0fd36c647a805438b0496d14709f317"} Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.548568 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="51706a1f-8c53-4031-9c47-4aef53d51260" containerName="cinder-scheduler" containerID="cri-o://789b2130d4c65c1f154f6d4cbe9b30cbbb50c3d1045f7fa59e4ca43548cd6dc1" gracePeriod=30 Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.549078 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="51706a1f-8c53-4031-9c47-4aef53d51260" containerName="probe" containerID="cri-o://362df8e263c28d69ab9117414f61b9c6f76f91f5d7fe4d7ff972db6ffd3ee940" gracePeriod=30 Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.549990 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-799657985-knzrm" podStartSLOduration=7.549976073 podStartE2EDuration="7.549976073s" podCreationTimestamp="2025-12-05 10:42:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:42:27.536309199 +0000 UTC m=+893.824414713" watchObservedRunningTime="2025-12-05 10:42:27.549976073 +0000 UTC m=+893.838081586" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.550957 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pkrsm" podUID="02d1fa9e-96e1-44c5-89dd-e2c619890cee" 
containerName="registry-server" containerID="cri-o://27f365494cbefc6f66de48e7357f7d3cbf903d85117b37ffaaa0daf8e8ef7250" gracePeriod=2 Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.572823 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.357895534 podStartE2EDuration="12.572808139s" podCreationTimestamp="2025-12-05 10:42:15 +0000 UTC" firstStartedPulling="2025-12-05 10:42:16.487152344 +0000 UTC m=+882.775257857" lastFinishedPulling="2025-12-05 10:42:26.702064949 +0000 UTC m=+892.990170462" observedRunningTime="2025-12-05 10:42:27.556643618 +0000 UTC m=+893.844749131" watchObservedRunningTime="2025-12-05 10:42:27.572808139 +0000 UTC m=+893.860913653" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.613664 4796 scope.go:117] "RemoveContainer" containerID="f4339d37be037c60dc834a791ce9bf16e462f20b7fea0a6894e832156318fb00" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.884520 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.888356 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.892381 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.910843 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 10:42:27 crc kubenswrapper[4796]: E1205 10:42:27.911137 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0" containerName="glance-log" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.911148 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0" containerName="glance-log" Dec 05 10:42:27 crc kubenswrapper[4796]: E1205 10:42:27.911159 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0" containerName="glance-httpd" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.911166 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0" containerName="glance-httpd" Dec 05 10:42:27 crc kubenswrapper[4796]: E1205 10:42:27.911190 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ef35ce-c0f4-47d7-b025-786c933f29f9" containerName="init" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.911196 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ef35ce-c0f4-47d7-b025-786c933f29f9" containerName="init" Dec 05 10:42:27 crc kubenswrapper[4796]: E1205 10:42:27.911210 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ef35ce-c0f4-47d7-b025-786c933f29f9" containerName="dnsmasq-dns" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.911215 4796 
state_mem.go:107] "Deleted CPUSet assignment" podUID="48ef35ce-c0f4-47d7-b025-786c933f29f9" containerName="dnsmasq-dns" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.911381 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="48ef35ce-c0f4-47d7-b025-786c933f29f9" containerName="dnsmasq-dns" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.911391 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0" containerName="glance-log" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.911406 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0" containerName="glance-httpd" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.912192 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.918054 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.918701 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.926462 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 10:42:27 crc kubenswrapper[4796]: I1205 10:42:27.991138 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.049633 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0" path="/var/lib/kubelet/pods/cbade96d-dbab-4c6c-b0bc-0b0dc1c439f0/volumes" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.062103 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkrsm" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.077243 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-ovsdbserver-sb\") pod \"48ef35ce-c0f4-47d7-b025-786c933f29f9\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.077313 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-ovsdbserver-nb\") pod \"48ef35ce-c0f4-47d7-b025-786c933f29f9\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.077361 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-config\") pod \"48ef35ce-c0f4-47d7-b025-786c933f29f9\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.077397 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpqxr\" (UniqueName: \"kubernetes.io/projected/48ef35ce-c0f4-47d7-b025-786c933f29f9-kube-api-access-hpqxr\") pod \"48ef35ce-c0f4-47d7-b025-786c933f29f9\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.077490 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-dns-swift-storage-0\") pod \"48ef35ce-c0f4-47d7-b025-786c933f29f9\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.077515 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-dns-svc\") pod \"48ef35ce-c0f4-47d7-b025-786c933f29f9\" (UID: \"48ef35ce-c0f4-47d7-b025-786c933f29f9\") " Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.077895 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e544456-c953-47f0-b274-0fc5d07483ce-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.077919 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e544456-c953-47f0-b274-0fc5d07483ce-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.077992 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmc5z\" (UniqueName: \"kubernetes.io/projected/3e544456-c953-47f0-b274-0fc5d07483ce-kube-api-access-xmc5z\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.078011 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e544456-c953-47f0-b274-0fc5d07483ce-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.078066 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/3e544456-c953-47f0-b274-0fc5d07483ce-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.078122 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e544456-c953-47f0-b274-0fc5d07483ce-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.078215 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.078274 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e544456-c953-47f0-b274-0fc5d07483ce-logs\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.085350 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ef35ce-c0f4-47d7-b025-786c933f29f9-kube-api-access-hpqxr" (OuterVolumeSpecName: "kube-api-access-hpqxr") pod "48ef35ce-c0f4-47d7-b025-786c933f29f9" (UID: "48ef35ce-c0f4-47d7-b025-786c933f29f9"). InnerVolumeSpecName "kube-api-access-hpqxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.136277 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-config" (OuterVolumeSpecName: "config") pod "48ef35ce-c0f4-47d7-b025-786c933f29f9" (UID: "48ef35ce-c0f4-47d7-b025-786c933f29f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.146560 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "48ef35ce-c0f4-47d7-b025-786c933f29f9" (UID: "48ef35ce-c0f4-47d7-b025-786c933f29f9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.170291 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "48ef35ce-c0f4-47d7-b025-786c933f29f9" (UID: "48ef35ce-c0f4-47d7-b025-786c933f29f9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.171607 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "48ef35ce-c0f4-47d7-b025-786c933f29f9" (UID: "48ef35ce-c0f4-47d7-b025-786c933f29f9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.172536 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "48ef35ce-c0f4-47d7-b025-786c933f29f9" (UID: "48ef35ce-c0f4-47d7-b025-786c933f29f9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.179943 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3481f623-3439-4c17-ab95-7bf31e8fa3a0-httpd-run\") pod \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.179996 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-combined-ca-bundle\") pod \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.180048 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3481f623-3439-4c17-ab95-7bf31e8fa3a0-logs\") pod \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.180076 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-scripts\") pod \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.180132 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-mlzf2\" (UniqueName: \"kubernetes.io/projected/02d1fa9e-96e1-44c5-89dd-e2c619890cee-kube-api-access-mlzf2\") pod \"02d1fa9e-96e1-44c5-89dd-e2c619890cee\" (UID: \"02d1fa9e-96e1-44c5-89dd-e2c619890cee\") " Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.180166 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d1fa9e-96e1-44c5-89dd-e2c619890cee-catalog-content\") pod \"02d1fa9e-96e1-44c5-89dd-e2c619890cee\" (UID: \"02d1fa9e-96e1-44c5-89dd-e2c619890cee\") " Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.180204 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.180249 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-public-tls-certs\") pod \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.180278 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d1fa9e-96e1-44c5-89dd-e2c619890cee-utilities\") pod \"02d1fa9e-96e1-44c5-89dd-e2c619890cee\" (UID: \"02d1fa9e-96e1-44c5-89dd-e2c619890cee\") " Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.180302 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl95b\" (UniqueName: \"kubernetes.io/projected/3481f623-3439-4c17-ab95-7bf31e8fa3a0-kube-api-access-tl95b\") pod \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\" (UID: 
\"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.180324 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-config-data\") pod \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\" (UID: \"3481f623-3439-4c17-ab95-7bf31e8fa3a0\") " Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.180390 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3481f623-3439-4c17-ab95-7bf31e8fa3a0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3481f623-3439-4c17-ab95-7bf31e8fa3a0" (UID: "3481f623-3439-4c17-ab95-7bf31e8fa3a0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.180470 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3481f623-3439-4c17-ab95-7bf31e8fa3a0-logs" (OuterVolumeSpecName: "logs") pod "3481f623-3439-4c17-ab95-7bf31e8fa3a0" (UID: "3481f623-3439-4c17-ab95-7bf31e8fa3a0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.180647 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e544456-c953-47f0-b274-0fc5d07483ce-logs\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.180760 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e544456-c953-47f0-b274-0fc5d07483ce-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.180780 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e544456-c953-47f0-b274-0fc5d07483ce-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.180850 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmc5z\" (UniqueName: \"kubernetes.io/projected/3e544456-c953-47f0-b274-0fc5d07483ce-kube-api-access-xmc5z\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.180867 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e544456-c953-47f0-b274-0fc5d07483ce-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 
crc kubenswrapper[4796]: I1205 10:42:28.180926 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e544456-c953-47f0-b274-0fc5d07483ce-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.180966 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e544456-c953-47f0-b274-0fc5d07483ce-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.181044 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.181119 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.181131 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.181396 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02d1fa9e-96e1-44c5-89dd-e2c619890cee-utilities" (OuterVolumeSpecName: "utilities") pod "02d1fa9e-96e1-44c5-89dd-e2c619890cee" (UID: 
"02d1fa9e-96e1-44c5-89dd-e2c619890cee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.181448 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.181489 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.182000 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e544456-c953-47f0-b274-0fc5d07483ce-logs\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.182585 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e544456-c953-47f0-b274-0fc5d07483ce-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.183020 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpqxr\" (UniqueName: \"kubernetes.io/projected/48ef35ce-c0f4-47d7-b025-786c933f29f9-kube-api-access-hpqxr\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.183043 4796 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/3481f623-3439-4c17-ab95-7bf31e8fa3a0-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.183055 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3481f623-3439-4c17-ab95-7bf31e8fa3a0-logs\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.183066 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.183078 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/48ef35ce-c0f4-47d7-b025-786c933f29f9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.185964 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e544456-c953-47f0-b274-0fc5d07483ce-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.186793 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "3481f623-3439-4c17-ab95-7bf31e8fa3a0" (UID: "3481f623-3439-4c17-ab95-7bf31e8fa3a0"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.189403 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02d1fa9e-96e1-44c5-89dd-e2c619890cee-kube-api-access-mlzf2" (OuterVolumeSpecName: "kube-api-access-mlzf2") pod "02d1fa9e-96e1-44c5-89dd-e2c619890cee" (UID: "02d1fa9e-96e1-44c5-89dd-e2c619890cee"). InnerVolumeSpecName "kube-api-access-mlzf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.191245 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3481f623-3439-4c17-ab95-7bf31e8fa3a0-kube-api-access-tl95b" (OuterVolumeSpecName: "kube-api-access-tl95b") pod "3481f623-3439-4c17-ab95-7bf31e8fa3a0" (UID: "3481f623-3439-4c17-ab95-7bf31e8fa3a0"). InnerVolumeSpecName "kube-api-access-tl95b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.191472 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-scripts" (OuterVolumeSpecName: "scripts") pod "3481f623-3439-4c17-ab95-7bf31e8fa3a0" (UID: "3481f623-3439-4c17-ab95-7bf31e8fa3a0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.191674 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e544456-c953-47f0-b274-0fc5d07483ce-scripts\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.196024 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e544456-c953-47f0-b274-0fc5d07483ce-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.197018 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e544456-c953-47f0-b274-0fc5d07483ce-config-data\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.203731 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmc5z\" (UniqueName: \"kubernetes.io/projected/3e544456-c953-47f0-b274-0fc5d07483ce-kube-api-access-xmc5z\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.208078 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3481f623-3439-4c17-ab95-7bf31e8fa3a0" (UID: "3481f623-3439-4c17-ab95-7bf31e8fa3a0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.211459 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.275587 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"3e544456-c953-47f0-b274-0fc5d07483ce\") " pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.280104 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.280200 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-config-data" (OuterVolumeSpecName: "config-data") pod "3481f623-3439-4c17-ab95-7bf31e8fa3a0" (UID: "3481f623-3439-4c17-ab95-7bf31e8fa3a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.284979 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3481f623-3439-4c17-ab95-7bf31e8fa3a0" (UID: "3481f623-3439-4c17-ab95-7bf31e8fa3a0"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.285550 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.285570 4796 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.285580 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02d1fa9e-96e1-44c5-89dd-e2c619890cee-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.285589 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl95b\" (UniqueName: \"kubernetes.io/projected/3481f623-3439-4c17-ab95-7bf31e8fa3a0-kube-api-access-tl95b\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.285601 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.285610 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.285618 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3481f623-3439-4c17-ab95-7bf31e8fa3a0-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.285627 4796 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-mlzf2\" (UniqueName: \"kubernetes.io/projected/02d1fa9e-96e1-44c5-89dd-e2c619890cee-kube-api-access-mlzf2\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.293289 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02d1fa9e-96e1-44c5-89dd-e2c619890cee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02d1fa9e-96e1-44c5-89dd-e2c619890cee" (UID: "02d1fa9e-96e1-44c5-89dd-e2c619890cee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.304532 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.388124 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02d1fa9e-96e1-44c5-89dd-e2c619890cee-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.388395 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.583472 4796 generic.go:334] "Generic (PLEG): container finished" podID="51706a1f-8c53-4031-9c47-4aef53d51260" containerID="362df8e263c28d69ab9117414f61b9c6f76f91f5d7fe4d7ff972db6ffd3ee940" exitCode=0 Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.583571 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"51706a1f-8c53-4031-9c47-4aef53d51260","Type":"ContainerDied","Data":"362df8e263c28d69ab9117414f61b9c6f76f91f5d7fe4d7ff972db6ffd3ee940"} Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.596063 4796 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8ffca929-12a6-40b2-96ee-ff84ea1818dc","Type":"ContainerStarted","Data":"50e1c9d80cc3565a120cfb546614a0ce6ba1d73601d970ae43a00f42971ebc7d"} Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.598463 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.603409 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8","Type":"ContainerStarted","Data":"f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138"} Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.603453 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8","Type":"ContainerStarted","Data":"ceb7e52d010256031a51c93ba56e40d4bd2cc4d624def98c2d44045ac9dbab9d"} Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.608936 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3481f623-3439-4c17-ab95-7bf31e8fa3a0","Type":"ContainerDied","Data":"aecba52a3639b9798b9a862ed57ce6c9b7569e894e54abd026998a4bef3fb801"} Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.608976 4796 scope.go:117] "RemoveContainer" containerID="53ead197645d5a519dff53b274a05f3c7fae717047ce888937387d43d99fdf00" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.609051 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.616201 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.616190035 podStartE2EDuration="8.616190035s" podCreationTimestamp="2025-12-05 10:42:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:42:28.612079722 +0000 UTC m=+894.900185235" watchObservedRunningTime="2025-12-05 10:42:28.616190035 +0000 UTC m=+894.904295548" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.631085 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65dd957765-k8dtb" event={"ID":"48ef35ce-c0f4-47d7-b025-786c933f29f9","Type":"ContainerDied","Data":"dd8ddd9d5210b5e60be2d2aa2c311f253f6f309d81d5aef487fb7f8d1849f554"} Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.631178 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65dd957765-k8dtb" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.640918 4796 generic.go:334] "Generic (PLEG): container finished" podID="15729ebc-4805-4ccd-a70b-ed95183246be" containerID="49db4fe06cf15ba77e62b78053731267566265b5f85c4ce01789e65926546d28" exitCode=0 Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.640987 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7pd69" event={"ID":"15729ebc-4805-4ccd-a70b-ed95183246be","Type":"ContainerDied","Data":"49db4fe06cf15ba77e62b78053731267566265b5f85c4ce01789e65926546d28"} Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.641022 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7pd69" event={"ID":"15729ebc-4805-4ccd-a70b-ed95183246be","Type":"ContainerStarted","Data":"060f45098d27397d7d782f8a7de2c4a134c75a5b399b7307a464e777e0c1b751"} Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.649264 4796 generic.go:334] "Generic (PLEG): container finished" podID="b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25" containerID="d2f143411097229a08a3e44dae20242bf337ca92d305e9cd0a5a436ed316f343" exitCode=0 Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.649339 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ggsqf" event={"ID":"b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25","Type":"ContainerDied","Data":"d2f143411097229a08a3e44dae20242bf337ca92d305e9cd0a5a436ed316f343"} Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.654399 4796 generic.go:334] "Generic (PLEG): container finished" podID="1a11a6bb-3600-4222-b30e-d78931484d32" containerID="bf00b761b856b10a4951e03da043f3a1511867016c55a38f57c935043bb69fe5" exitCode=0 Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.654445 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pzwzv" 
event={"ID":"1a11a6bb-3600-4222-b30e-d78931484d32","Type":"ContainerDied","Data":"bf00b761b856b10a4951e03da043f3a1511867016c55a38f57c935043bb69fe5"} Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.654460 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pzwzv" event={"ID":"1a11a6bb-3600-4222-b30e-d78931484d32","Type":"ContainerStarted","Data":"f058e7885e87f52ba5f82524eded6d6668795062b0382ccdf6824e46c1787374"} Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.682000 4796 scope.go:117] "RemoveContainer" containerID="efd6c98a037dee2fc9d53df24d6f2199eb9527a02cf9f7d0cb221a4a61e36963" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.683115 4796 generic.go:334] "Generic (PLEG): container finished" podID="02d1fa9e-96e1-44c5-89dd-e2c619890cee" containerID="27f365494cbefc6f66de48e7357f7d3cbf903d85117b37ffaaa0daf8e8ef7250" exitCode=0 Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.683273 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkrsm" event={"ID":"02d1fa9e-96e1-44c5-89dd-e2c619890cee","Type":"ContainerDied","Data":"27f365494cbefc6f66de48e7357f7d3cbf903d85117b37ffaaa0daf8e8ef7250"} Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.683353 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pkrsm" event={"ID":"02d1fa9e-96e1-44c5-89dd-e2c619890cee","Type":"ContainerDied","Data":"44d4c22aa7f7a963ff118788818db532e5f40854af6f7de105d9c2250d95e1f5"} Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.683477 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pkrsm" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.686770 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65dd957765-k8dtb"] Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.707441 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65dd957765-k8dtb"] Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.736580 4796 scope.go:117] "RemoveContainer" containerID="827fc1b8dbc4d9127c757c3c2c09072a889a65b967d2563ef4c713cc89262a7c" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.770179 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.774544 4796 scope.go:117] "RemoveContainer" containerID="76d436c60123508c75fc6c36b79ba7932596555c9e648b64215babec56330941" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.784441 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.797615 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 10:42:28 crc kubenswrapper[4796]: E1205 10:42:28.797980 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3481f623-3439-4c17-ab95-7bf31e8fa3a0" containerName="glance-httpd" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.797992 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3481f623-3439-4c17-ab95-7bf31e8fa3a0" containerName="glance-httpd" Dec 05 10:42:28 crc kubenswrapper[4796]: E1205 10:42:28.798002 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d1fa9e-96e1-44c5-89dd-e2c619890cee" containerName="extract-utilities" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.798008 4796 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="02d1fa9e-96e1-44c5-89dd-e2c619890cee" containerName="extract-utilities" Dec 05 10:42:28 crc kubenswrapper[4796]: E1205 10:42:28.798042 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3481f623-3439-4c17-ab95-7bf31e8fa3a0" containerName="glance-log" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.798048 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="3481f623-3439-4c17-ab95-7bf31e8fa3a0" containerName="glance-log" Dec 05 10:42:28 crc kubenswrapper[4796]: E1205 10:42:28.798062 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d1fa9e-96e1-44c5-89dd-e2c619890cee" containerName="extract-content" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.798067 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d1fa9e-96e1-44c5-89dd-e2c619890cee" containerName="extract-content" Dec 05 10:42:28 crc kubenswrapper[4796]: E1205 10:42:28.798079 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d1fa9e-96e1-44c5-89dd-e2c619890cee" containerName="registry-server" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.798085 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d1fa9e-96e1-44c5-89dd-e2c619890cee" containerName="registry-server" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.798247 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3481f623-3439-4c17-ab95-7bf31e8fa3a0" containerName="glance-log" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.798270 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="02d1fa9e-96e1-44c5-89dd-e2c619890cee" containerName="registry-server" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.798281 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="3481f623-3439-4c17-ab95-7bf31e8fa3a0" containerName="glance-httpd" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.802241 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.804232 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.804464 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.810088 4796 scope.go:117] "RemoveContainer" containerID="27f365494cbefc6f66de48e7357f7d3cbf903d85117b37ffaaa0daf8e8ef7250" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.825990 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pkrsm"] Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.841332 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pkrsm"] Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.849672 4796 scope.go:117] "RemoveContainer" containerID="fbd7b50b7db938066b734438baa9c727cd55bf66a79167a70cc5178bfb225320" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.850786 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.882239 4796 scope.go:117] "RemoveContainer" containerID="7a49a8a2575ab507a720e314ef5719763f8a5d44cc8ef6277741a3e650e05b59" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.902344 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.902467 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-config-data\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.902497 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-scripts\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.902523 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.902558 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf65k\" (UniqueName: \"kubernetes.io/projected/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-kube-api-access-vf65k\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.902596 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-logs\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.902611 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.902641 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.924656 4796 scope.go:117] "RemoveContainer" containerID="27f365494cbefc6f66de48e7357f7d3cbf903d85117b37ffaaa0daf8e8ef7250" Dec 05 10:42:28 crc kubenswrapper[4796]: E1205 10:42:28.925600 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27f365494cbefc6f66de48e7357f7d3cbf903d85117b37ffaaa0daf8e8ef7250\": container with ID starting with 27f365494cbefc6f66de48e7357f7d3cbf903d85117b37ffaaa0daf8e8ef7250 not found: ID does not exist" containerID="27f365494cbefc6f66de48e7357f7d3cbf903d85117b37ffaaa0daf8e8ef7250" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.925637 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27f365494cbefc6f66de48e7357f7d3cbf903d85117b37ffaaa0daf8e8ef7250"} err="failed to get container status \"27f365494cbefc6f66de48e7357f7d3cbf903d85117b37ffaaa0daf8e8ef7250\": rpc error: code = NotFound desc = could not find container \"27f365494cbefc6f66de48e7357f7d3cbf903d85117b37ffaaa0daf8e8ef7250\": container with ID starting with 27f365494cbefc6f66de48e7357f7d3cbf903d85117b37ffaaa0daf8e8ef7250 not found: ID does not exist" Dec 05 10:42:28 crc kubenswrapper[4796]: 
I1205 10:42:28.925667 4796 scope.go:117] "RemoveContainer" containerID="fbd7b50b7db938066b734438baa9c727cd55bf66a79167a70cc5178bfb225320" Dec 05 10:42:28 crc kubenswrapper[4796]: E1205 10:42:28.927281 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbd7b50b7db938066b734438baa9c727cd55bf66a79167a70cc5178bfb225320\": container with ID starting with fbd7b50b7db938066b734438baa9c727cd55bf66a79167a70cc5178bfb225320 not found: ID does not exist" containerID="fbd7b50b7db938066b734438baa9c727cd55bf66a79167a70cc5178bfb225320" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.927323 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbd7b50b7db938066b734438baa9c727cd55bf66a79167a70cc5178bfb225320"} err="failed to get container status \"fbd7b50b7db938066b734438baa9c727cd55bf66a79167a70cc5178bfb225320\": rpc error: code = NotFound desc = could not find container \"fbd7b50b7db938066b734438baa9c727cd55bf66a79167a70cc5178bfb225320\": container with ID starting with fbd7b50b7db938066b734438baa9c727cd55bf66a79167a70cc5178bfb225320 not found: ID does not exist" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.927350 4796 scope.go:117] "RemoveContainer" containerID="7a49a8a2575ab507a720e314ef5719763f8a5d44cc8ef6277741a3e650e05b59" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.927523 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 05 10:42:28 crc kubenswrapper[4796]: E1205 10:42:28.927625 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a49a8a2575ab507a720e314ef5719763f8a5d44cc8ef6277741a3e650e05b59\": container with ID starting with 7a49a8a2575ab507a720e314ef5719763f8a5d44cc8ef6277741a3e650e05b59 not found: ID does not exist" 
containerID="7a49a8a2575ab507a720e314ef5719763f8a5d44cc8ef6277741a3e650e05b59" Dec 05 10:42:28 crc kubenswrapper[4796]: I1205 10:42:28.927666 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a49a8a2575ab507a720e314ef5719763f8a5d44cc8ef6277741a3e650e05b59"} err="failed to get container status \"7a49a8a2575ab507a720e314ef5719763f8a5d44cc8ef6277741a3e650e05b59\": rpc error: code = NotFound desc = could not find container \"7a49a8a2575ab507a720e314ef5719763f8a5d44cc8ef6277741a3e650e05b59\": container with ID starting with 7a49a8a2575ab507a720e314ef5719763f8a5d44cc8ef6277741a3e650e05b59 not found: ID does not exist" Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.004270 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-config-data\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.004531 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-scripts\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.004604 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.005007 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.005164 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf65k\" (UniqueName: \"kubernetes.io/projected/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-kube-api-access-vf65k\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.005266 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-logs\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.005292 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.005330 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.005406 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.006319 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-logs\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.006807 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.008625 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-scripts\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.011381 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.011382 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.012716 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-config-data\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.019697 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf65k\" (UniqueName: \"kubernetes.io/projected/c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f-kube-api-access-vf65k\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.049827 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f\") " pod="openstack/glance-default-external-api-0" Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.147241 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.655199 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.694919 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3e544456-c953-47f0-b274-0fc5d07483ce","Type":"ContainerStarted","Data":"77a22698401ca5a881c0a05f5c1c2b6317d1921e5197ef4b9e175916d30c4bfb"} Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.694951 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3e544456-c953-47f0-b274-0fc5d07483ce","Type":"ContainerStarted","Data":"e4cbab39ed7f8ec58fa6e5dd534923e5c4b06590e2b8b5a908bd89e23dcaaa86"} Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.695794 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f","Type":"ContainerStarted","Data":"752c72f8200fd335068fab5da521acb68cfedcafee4de0a1d406e93483470b33"} Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.700229 4796 generic.go:334] "Generic (PLEG): container finished" podID="51706a1f-8c53-4031-9c47-4aef53d51260" containerID="789b2130d4c65c1f154f6d4cbe9b30cbbb50c3d1045f7fa59e4ca43548cd6dc1" exitCode=0 Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.700282 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"51706a1f-8c53-4031-9c47-4aef53d51260","Type":"ContainerDied","Data":"789b2130d4c65c1f154f6d4cbe9b30cbbb50c3d1045f7fa59e4ca43548cd6dc1"} Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.704614 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8","Type":"ContainerStarted","Data":"e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84"} Dec 05 10:42:29 crc kubenswrapper[4796]: I1205 10:42:29.948994 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.027501 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-fb965878c-qncj9" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.071995 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02d1fa9e-96e1-44c5-89dd-e2c619890cee" path="/var/lib/kubelet/pods/02d1fa9e-96e1-44c5-89dd-e2c619890cee/volumes" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.072890 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3481f623-3439-4c17-ab95-7bf31e8fa3a0" path="/var/lib/kubelet/pods/3481f623-3439-4c17-ab95-7bf31e8fa3a0/volumes" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.073446 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ef35ce-c0f4-47d7-b025-786c933f29f9" path="/var/lib/kubelet/pods/48ef35ce-c0f4-47d7-b025-786c933f29f9/volumes" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.110837 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f5c67c464-8hmbc"] Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.111062 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f5c67c464-8hmbc" podUID="0c5562c0-e374-49b7-93da-78040b742805" containerName="neutron-api" containerID="cri-o://4a3f3b07f141019e83de6953dc7badc4c42fb749307ea42afd566cdeb6b8110c" gracePeriod=30 Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.112198 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f5c67c464-8hmbc" podUID="0c5562c0-e374-49b7-93da-78040b742805" 
containerName="neutron-httpd" containerID="cri-o://87fcdcfc32559969336f51c57e8712b9a25d66666df0a9b8dc53381299bb8b05" gracePeriod=30 Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.137537 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29lk5\" (UniqueName: \"kubernetes.io/projected/51706a1f-8c53-4031-9c47-4aef53d51260-kube-api-access-29lk5\") pod \"51706a1f-8c53-4031-9c47-4aef53d51260\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.137788 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51706a1f-8c53-4031-9c47-4aef53d51260-etc-machine-id\") pod \"51706a1f-8c53-4031-9c47-4aef53d51260\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.137995 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-config-data-custom\") pod \"51706a1f-8c53-4031-9c47-4aef53d51260\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.138061 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-scripts\") pod \"51706a1f-8c53-4031-9c47-4aef53d51260\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.138146 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-config-data\") pod \"51706a1f-8c53-4031-9c47-4aef53d51260\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.138230 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-combined-ca-bundle\") pod \"51706a1f-8c53-4031-9c47-4aef53d51260\" (UID: \"51706a1f-8c53-4031-9c47-4aef53d51260\") " Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.139157 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51706a1f-8c53-4031-9c47-4aef53d51260-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "51706a1f-8c53-4031-9c47-4aef53d51260" (UID: "51706a1f-8c53-4031-9c47-4aef53d51260"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.145053 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51706a1f-8c53-4031-9c47-4aef53d51260-kube-api-access-29lk5" (OuterVolumeSpecName: "kube-api-access-29lk5") pod "51706a1f-8c53-4031-9c47-4aef53d51260" (UID: "51706a1f-8c53-4031-9c47-4aef53d51260"). InnerVolumeSpecName "kube-api-access-29lk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.145813 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "51706a1f-8c53-4031-9c47-4aef53d51260" (UID: "51706a1f-8c53-4031-9c47-4aef53d51260"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.148887 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-scripts" (OuterVolumeSpecName: "scripts") pod "51706a1f-8c53-4031-9c47-4aef53d51260" (UID: "51706a1f-8c53-4031-9c47-4aef53d51260"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.196167 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pzwzv" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.243333 4796 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.243355 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.243366 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29lk5\" (UniqueName: \"kubernetes.io/projected/51706a1f-8c53-4031-9c47-4aef53d51260-kube-api-access-29lk5\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.243374 4796 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/51706a1f-8c53-4031-9c47-4aef53d51260-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.253868 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51706a1f-8c53-4031-9c47-4aef53d51260" (UID: "51706a1f-8c53-4031-9c47-4aef53d51260"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.263758 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-config-data" (OuterVolumeSpecName: "config-data") pod "51706a1f-8c53-4031-9c47-4aef53d51260" (UID: "51706a1f-8c53-4031-9c47-4aef53d51260"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.273151 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-ggsqf" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.281631 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7pd69" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.345401 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptd9x\" (UniqueName: \"kubernetes.io/projected/1a11a6bb-3600-4222-b30e-d78931484d32-kube-api-access-ptd9x\") pod \"1a11a6bb-3600-4222-b30e-d78931484d32\" (UID: \"1a11a6bb-3600-4222-b30e-d78931484d32\") " Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.346315 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.346338 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51706a1f-8c53-4031-9c47-4aef53d51260-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.352777 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a11a6bb-3600-4222-b30e-d78931484d32-kube-api-access-ptd9x" 
(OuterVolumeSpecName: "kube-api-access-ptd9x") pod "1a11a6bb-3600-4222-b30e-d78931484d32" (UID: "1a11a6bb-3600-4222-b30e-d78931484d32"). InnerVolumeSpecName "kube-api-access-ptd9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.448081 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2474b\" (UniqueName: \"kubernetes.io/projected/15729ebc-4805-4ccd-a70b-ed95183246be-kube-api-access-2474b\") pod \"15729ebc-4805-4ccd-a70b-ed95183246be\" (UID: \"15729ebc-4805-4ccd-a70b-ed95183246be\") " Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.448326 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxqph\" (UniqueName: \"kubernetes.io/projected/b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25-kube-api-access-mxqph\") pod \"b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25\" (UID: \"b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25\") " Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.448833 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptd9x\" (UniqueName: \"kubernetes.io/projected/1a11a6bb-3600-4222-b30e-d78931484d32-kube-api-access-ptd9x\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.451989 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15729ebc-4805-4ccd-a70b-ed95183246be-kube-api-access-2474b" (OuterVolumeSpecName: "kube-api-access-2474b") pod "15729ebc-4805-4ccd-a70b-ed95183246be" (UID: "15729ebc-4805-4ccd-a70b-ed95183246be"). InnerVolumeSpecName "kube-api-access-2474b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.453545 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25-kube-api-access-mxqph" (OuterVolumeSpecName: "kube-api-access-mxqph") pod "b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25" (UID: "b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25"). InnerVolumeSpecName "kube-api-access-mxqph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.550770 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxqph\" (UniqueName: \"kubernetes.io/projected/b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25-kube-api-access-mxqph\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.550804 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2474b\" (UniqueName: \"kubernetes.io/projected/15729ebc-4805-4ccd-a70b-ed95183246be-kube-api-access-2474b\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.553718 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69fdddc9b6-2ckhp" podUID="fdca92fe-39ad-41e9-978b-1757290eee03" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.140:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.140:8443: connect: connection refused" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.553846 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.726876 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pzwzv" event={"ID":"1a11a6bb-3600-4222-b30e-d78931484d32","Type":"ContainerDied","Data":"f058e7885e87f52ba5f82524eded6d6668795062b0382ccdf6824e46c1787374"} Dec 05 10:42:30 crc kubenswrapper[4796]: 
I1205 10:42:30.726893 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pzwzv" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.726991 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f058e7885e87f52ba5f82524eded6d6668795062b0382ccdf6824e46c1787374" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.729902 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"51706a1f-8c53-4031-9c47-4aef53d51260","Type":"ContainerDied","Data":"9141b242b08ff6c6c8bb9fe542a15fafb779f01c472224448a17250a8304d2d6"} Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.730015 4796 scope.go:117] "RemoveContainer" containerID="362df8e263c28d69ab9117414f61b9c6f76f91f5d7fe4d7ff972db6ffd3ee940" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.729936 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.739119 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-7pd69" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.739156 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7pd69" event={"ID":"15729ebc-4805-4ccd-a70b-ed95183246be","Type":"ContainerDied","Data":"060f45098d27397d7d782f8a7de2c4a134c75a5b399b7307a464e777e0c1b751"} Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.739204 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="060f45098d27397d7d782f8a7de2c4a134c75a5b399b7307a464e777e0c1b751" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.746317 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-ggsqf" event={"ID":"b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25","Type":"ContainerDied","Data":"56caedf7264555a00e43801d6fe4f14e337dd912995abe5c0b847b59f6955d5d"} Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.746355 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56caedf7264555a00e43801d6fe4f14e337dd912995abe5c0b847b59f6955d5d" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.746452 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-ggsqf" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.748100 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"3e544456-c953-47f0-b274-0fc5d07483ce","Type":"ContainerStarted","Data":"201047cc7cf8c0f1615f4a5678f9afc3f0d058ba26f9e3ce99ac91d8d41aa7ce"} Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.752353 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8","Type":"ContainerStarted","Data":"098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b"} Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.754645 4796 generic.go:334] "Generic (PLEG): container finished" podID="0c5562c0-e374-49b7-93da-78040b742805" containerID="87fcdcfc32559969336f51c57e8712b9a25d66666df0a9b8dc53381299bb8b05" exitCode=0 Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.754739 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f5c67c464-8hmbc" event={"ID":"0c5562c0-e374-49b7-93da-78040b742805","Type":"ContainerDied","Data":"87fcdcfc32559969336f51c57e8712b9a25d66666df0a9b8dc53381299bb8b05"} Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.763003 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f","Type":"ContainerStarted","Data":"735456ecdc6f4a6a623180c91a6d3a9f03c9db8a8a4d50a8d32395726c6d5714"} Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.769308 4796 scope.go:117] "RemoveContainer" containerID="789b2130d4c65c1f154f6d4cbe9b30cbbb50c3d1045f7fa59e4ca43548cd6dc1" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.778298 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.778282522 podStartE2EDuration="3.778282522s" 
podCreationTimestamp="2025-12-05 10:42:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:42:30.765885127 +0000 UTC m=+897.053990640" watchObservedRunningTime="2025-12-05 10:42:30.778282522 +0000 UTC m=+897.066388034" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.847042 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.854811 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.860856 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 10:42:30 crc kubenswrapper[4796]: E1205 10:42:30.861746 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51706a1f-8c53-4031-9c47-4aef53d51260" containerName="cinder-scheduler" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.861764 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="51706a1f-8c53-4031-9c47-4aef53d51260" containerName="cinder-scheduler" Dec 05 10:42:30 crc kubenswrapper[4796]: E1205 10:42:30.861781 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51706a1f-8c53-4031-9c47-4aef53d51260" containerName="probe" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.861788 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="51706a1f-8c53-4031-9c47-4aef53d51260" containerName="probe" Dec 05 10:42:30 crc kubenswrapper[4796]: E1205 10:42:30.861806 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15729ebc-4805-4ccd-a70b-ed95183246be" containerName="mariadb-database-create" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.861811 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="15729ebc-4805-4ccd-a70b-ed95183246be" containerName="mariadb-database-create" Dec 05 10:42:30 crc 
kubenswrapper[4796]: E1205 10:42:30.861835 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25" containerName="mariadb-database-create" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.861842 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25" containerName="mariadb-database-create" Dec 05 10:42:30 crc kubenswrapper[4796]: E1205 10:42:30.861859 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a11a6bb-3600-4222-b30e-d78931484d32" containerName="mariadb-database-create" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.861866 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a11a6bb-3600-4222-b30e-d78931484d32" containerName="mariadb-database-create" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.862025 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="15729ebc-4805-4ccd-a70b-ed95183246be" containerName="mariadb-database-create" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.862041 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25" containerName="mariadb-database-create" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.862054 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="51706a1f-8c53-4031-9c47-4aef53d51260" containerName="probe" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.862069 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a11a6bb-3600-4222-b30e-d78931484d32" containerName="mariadb-database-create" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.862080 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="51706a1f-8c53-4031-9c47-4aef53d51260" containerName="cinder-scheduler" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.863152 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.864970 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.874446 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.960366 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c53203c0-53e8-4b2b-90d3-a9833bd9e7f2-config-data\") pod \"cinder-scheduler-0\" (UID: \"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.960434 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c53203c0-53e8-4b2b-90d3-a9833bd9e7f2-scripts\") pod \"cinder-scheduler-0\" (UID: \"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.960458 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c53203c0-53e8-4b2b-90d3-a9833bd9e7f2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.960487 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c53203c0-53e8-4b2b-90d3-a9833bd9e7f2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.960874 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c53203c0-53e8-4b2b-90d3-a9833bd9e7f2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:30 crc kubenswrapper[4796]: I1205 10:42:30.961302 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9tqj\" (UniqueName: \"kubernetes.io/projected/c53203c0-53e8-4b2b-90d3-a9833bd9e7f2-kube-api-access-p9tqj\") pod \"cinder-scheduler-0\" (UID: \"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.062760 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9tqj\" (UniqueName: \"kubernetes.io/projected/c53203c0-53e8-4b2b-90d3-a9833bd9e7f2-kube-api-access-p9tqj\") pod \"cinder-scheduler-0\" (UID: \"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.063103 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c53203c0-53e8-4b2b-90d3-a9833bd9e7f2-config-data\") pod \"cinder-scheduler-0\" (UID: \"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.063141 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c53203c0-53e8-4b2b-90d3-a9833bd9e7f2-scripts\") pod \"cinder-scheduler-0\" (UID: \"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.063162 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/c53203c0-53e8-4b2b-90d3-a9833bd9e7f2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.063193 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c53203c0-53e8-4b2b-90d3-a9833bd9e7f2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.063243 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c53203c0-53e8-4b2b-90d3-a9833bd9e7f2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.063346 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c53203c0-53e8-4b2b-90d3-a9833bd9e7f2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.069462 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c53203c0-53e8-4b2b-90d3-a9833bd9e7f2-scripts\") pod \"cinder-scheduler-0\" (UID: \"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.069940 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c53203c0-53e8-4b2b-90d3-a9833bd9e7f2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2\") " 
pod="openstack/cinder-scheduler-0" Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.074541 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c53203c0-53e8-4b2b-90d3-a9833bd9e7f2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.083120 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c53203c0-53e8-4b2b-90d3-a9833bd9e7f2-config-data\") pod \"cinder-scheduler-0\" (UID: \"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.084940 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9tqj\" (UniqueName: \"kubernetes.io/projected/c53203c0-53e8-4b2b-90d3-a9833bd9e7f2-kube-api-access-p9tqj\") pod \"cinder-scheduler-0\" (UID: \"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2\") " pod="openstack/cinder-scheduler-0" Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.167363 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.177071 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 05 10:42:31 crc kubenswrapper[4796]: W1205 10:42:31.595971 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc53203c0_53e8_4b2b_90d3_a9833bd9e7f2.slice/crio-e37bf6e4cdb76bf2998a87273c26229c6ccb398e5f0336660b7163b2e43fe16a WatchSource:0}: Error finding container e37bf6e4cdb76bf2998a87273c26229c6ccb398e5f0336660b7163b2e43fe16a: Status 404 returned error can't find the container with id e37bf6e4cdb76bf2998a87273c26229c6ccb398e5f0336660b7163b2e43fe16a Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.596734 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.775223 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8","Type":"ContainerStarted","Data":"4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24"} Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.775401 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerName="ceilometer-central-agent" containerID="cri-o://f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138" gracePeriod=30 Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.776592 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.776860 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerName="sg-core" containerID="cri-o://098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b" gracePeriod=30 Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.776943 4796 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerName="proxy-httpd" containerID="cri-o://4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24" gracePeriod=30 Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.776995 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerName="ceilometer-notification-agent" containerID="cri-o://e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84" gracePeriod=30 Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.779910 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f","Type":"ContainerStarted","Data":"0835d89c9f83a33a0d25591fdd0aebcc2e35bb2e6be0ed17ec979b7edf27676e"} Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.791400 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2","Type":"ContainerStarted","Data":"e37bf6e4cdb76bf2998a87273c26229c6ccb398e5f0336660b7163b2e43fe16a"} Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.838473 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.8384416630000002 podStartE2EDuration="3.838441663s" podCreationTimestamp="2025-12-05 10:42:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:42:31.82005518 +0000 UTC m=+898.108160683" watchObservedRunningTime="2025-12-05 10:42:31.838441663 +0000 UTC m=+898.126547176" Dec 05 10:42:31 crc kubenswrapper[4796]: I1205 10:42:31.840627 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=6.196921107 podStartE2EDuration="9.84061902s" podCreationTimestamp="2025-12-05 10:42:22 +0000 UTC" firstStartedPulling="2025-12-05 10:42:27.576647603 +0000 UTC m=+893.864753116" lastFinishedPulling="2025-12-05 10:42:31.220345516 +0000 UTC m=+897.508451029" observedRunningTime="2025-12-05 10:42:31.802082436 +0000 UTC m=+898.090187949" watchObservedRunningTime="2025-12-05 10:42:31.84061902 +0000 UTC m=+898.128724532" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.041784 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51706a1f-8c53-4031-9c47-4aef53d51260" path="/var/lib/kubelet/pods/51706a1f-8c53-4031-9c47-4aef53d51260/volumes" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.463286 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.597336 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-scripts\") pod \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.597409 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-run-httpd\") pod \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.597451 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-combined-ca-bundle\") pod \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.597639 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-sg-core-conf-yaml\") pod \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.597674 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-config-data\") pod \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.597715 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-log-httpd\") pod \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.597742 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhcwq\" (UniqueName: \"kubernetes.io/projected/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-kube-api-access-zhcwq\") pod \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\" (UID: \"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8\") " Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.608818 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-kube-api-access-zhcwq" (OuterVolumeSpecName: "kube-api-access-zhcwq") pod "534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" (UID: "534a78f4-35f6-4cd7-ab60-7919b5bbc2f8"). InnerVolumeSpecName "kube-api-access-zhcwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.609083 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" (UID: "534a78f4-35f6-4cd7-ab60-7919b5bbc2f8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.616406 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-scripts" (OuterVolumeSpecName: "scripts") pod "534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" (UID: "534a78f4-35f6-4cd7-ab60-7919b5bbc2f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.616556 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" (UID: "534a78f4-35f6-4cd7-ab60-7919b5bbc2f8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.644767 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" (UID: "534a78f4-35f6-4cd7-ab60-7919b5bbc2f8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.696370 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" (UID: "534a78f4-35f6-4cd7-ab60-7919b5bbc2f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.700692 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.700726 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.700737 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhcwq\" (UniqueName: \"kubernetes.io/projected/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-kube-api-access-zhcwq\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.700748 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.700758 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.700766 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.732730 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-config-data" (OuterVolumeSpecName: "config-data") pod "534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" (UID: "534a78f4-35f6-4cd7-ab60-7919b5bbc2f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.802023 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.807021 4796 generic.go:334] "Generic (PLEG): container finished" podID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerID="4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24" exitCode=0 Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.807046 4796 generic.go:334] "Generic (PLEG): container finished" podID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerID="098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b" exitCode=2 Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.807053 4796 generic.go:334] "Generic (PLEG): container finished" podID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerID="e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84" exitCode=0 Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.807060 4796 generic.go:334] "Generic (PLEG): container finished" podID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerID="f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138" exitCode=0 Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.807080 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.807105 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8","Type":"ContainerDied","Data":"4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24"} Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.807129 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8","Type":"ContainerDied","Data":"098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b"} Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.807142 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8","Type":"ContainerDied","Data":"e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84"} Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.807150 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8","Type":"ContainerDied","Data":"f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138"} Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.807159 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"534a78f4-35f6-4cd7-ab60-7919b5bbc2f8","Type":"ContainerDied","Data":"ceb7e52d010256031a51c93ba56e40d4bd2cc4d624def98c2d44045ac9dbab9d"} Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.807173 4796 scope.go:117] "RemoveContainer" containerID="4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.810995 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2","Type":"ContainerStarted","Data":"65dfca8ba53316825c130a66dd99df31776850ed5c18c6d511eb5449129e55cd"} Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.829765 4796 scope.go:117] "RemoveContainer" containerID="098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.831163 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.83114439 podStartE2EDuration="2.83114439s" podCreationTimestamp="2025-12-05 10:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:42:32.824305664 +0000 UTC m=+899.112411177" watchObservedRunningTime="2025-12-05 10:42:32.83114439 +0000 UTC m=+899.119249903" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.845919 4796 scope.go:117] "RemoveContainer" containerID="e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.852758 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.863731 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.879465 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:42:32 crc kubenswrapper[4796]: E1205 10:42:32.880227 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerName="ceilometer-notification-agent" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.880249 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerName="ceilometer-notification-agent" Dec 05 10:42:32 crc kubenswrapper[4796]: E1205 10:42:32.880275 4796 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerName="ceilometer-central-agent" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.880282 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerName="ceilometer-central-agent" Dec 05 10:42:32 crc kubenswrapper[4796]: E1205 10:42:32.880322 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerName="proxy-httpd" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.880328 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerName="proxy-httpd" Dec 05 10:42:32 crc kubenswrapper[4796]: E1205 10:42:32.880339 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerName="sg-core" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.880348 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerName="sg-core" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.882024 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerName="ceilometer-notification-agent" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.882080 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerName="proxy-httpd" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.882106 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerName="sg-core" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.882124 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" containerName="ceilometer-central-agent" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 
10:42:32.894497 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.894573 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.894674 4796 scope.go:117] "RemoveContainer" containerID="f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.899366 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.901200 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.937023 4796 scope.go:117] "RemoveContainer" containerID="4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24" Dec 05 10:42:32 crc kubenswrapper[4796]: E1205 10:42:32.938039 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24\": container with ID starting with 4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24 not found: ID does not exist" containerID="4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.938092 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24"} err="failed to get container status \"4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24\": rpc error: code = NotFound desc = could not find container \"4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24\": container with ID starting with 4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24 not found: ID 
does not exist" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.938128 4796 scope.go:117] "RemoveContainer" containerID="098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b" Dec 05 10:42:32 crc kubenswrapper[4796]: E1205 10:42:32.938813 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b\": container with ID starting with 098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b not found: ID does not exist" containerID="098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.938839 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b"} err="failed to get container status \"098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b\": rpc error: code = NotFound desc = could not find container \"098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b\": container with ID starting with 098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b not found: ID does not exist" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.938857 4796 scope.go:117] "RemoveContainer" containerID="e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84" Dec 05 10:42:32 crc kubenswrapper[4796]: E1205 10:42:32.941411 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84\": container with ID starting with e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84 not found: ID does not exist" containerID="e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.941448 4796 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84"} err="failed to get container status \"e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84\": rpc error: code = NotFound desc = could not find container \"e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84\": container with ID starting with e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84 not found: ID does not exist" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.941471 4796 scope.go:117] "RemoveContainer" containerID="f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138" Dec 05 10:42:32 crc kubenswrapper[4796]: E1205 10:42:32.941832 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138\": container with ID starting with f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138 not found: ID does not exist" containerID="f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.941881 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138"} err="failed to get container status \"f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138\": rpc error: code = NotFound desc = could not find container \"f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138\": container with ID starting with f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138 not found: ID does not exist" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.941908 4796 scope.go:117] "RemoveContainer" containerID="4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.942540 4796 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24"} err="failed to get container status \"4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24\": rpc error: code = NotFound desc = could not find container \"4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24\": container with ID starting with 4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24 not found: ID does not exist" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.942562 4796 scope.go:117] "RemoveContainer" containerID="098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.942786 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b"} err="failed to get container status \"098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b\": rpc error: code = NotFound desc = could not find container \"098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b\": container with ID starting with 098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b not found: ID does not exist" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.942818 4796 scope.go:117] "RemoveContainer" containerID="e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.943163 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84"} err="failed to get container status \"e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84\": rpc error: code = NotFound desc = could not find container \"e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84\": container with ID starting with 
e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84 not found: ID does not exist" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.943190 4796 scope.go:117] "RemoveContainer" containerID="f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.943470 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138"} err="failed to get container status \"f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138\": rpc error: code = NotFound desc = could not find container \"f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138\": container with ID starting with f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138 not found: ID does not exist" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.943489 4796 scope.go:117] "RemoveContainer" containerID="4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.943719 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24"} err="failed to get container status \"4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24\": rpc error: code = NotFound desc = could not find container \"4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24\": container with ID starting with 4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24 not found: ID does not exist" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.943737 4796 scope.go:117] "RemoveContainer" containerID="098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.943941 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b"} err="failed to get container status \"098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b\": rpc error: code = NotFound desc = could not find container \"098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b\": container with ID starting with 098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b not found: ID does not exist" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.943958 4796 scope.go:117] "RemoveContainer" containerID="e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.944316 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84"} err="failed to get container status \"e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84\": rpc error: code = NotFound desc = could not find container \"e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84\": container with ID starting with e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84 not found: ID does not exist" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.944337 4796 scope.go:117] "RemoveContainer" containerID="f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.944546 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138"} err="failed to get container status \"f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138\": rpc error: code = NotFound desc = could not find container \"f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138\": container with ID starting with f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138 not found: ID does not 
exist" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.944563 4796 scope.go:117] "RemoveContainer" containerID="4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.944771 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24"} err="failed to get container status \"4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24\": rpc error: code = NotFound desc = could not find container \"4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24\": container with ID starting with 4c52b88bb3d252e137db9159582d875c18300ce7dcbb5168ad0a819673c6eb24 not found: ID does not exist" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.944788 4796 scope.go:117] "RemoveContainer" containerID="098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.945030 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b"} err="failed to get container status \"098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b\": rpc error: code = NotFound desc = could not find container \"098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b\": container with ID starting with 098ec8cfa462b297d5ea4f61d044d56756a2ab9d33601e6b8d4410e6b091941b not found: ID does not exist" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.945081 4796 scope.go:117] "RemoveContainer" containerID="e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.945717 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84"} err="failed to get container status 
\"e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84\": rpc error: code = NotFound desc = could not find container \"e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84\": container with ID starting with e39e70a84b662410018687bb75b7d9dd1c2a4c93553495ec5f570c5d95240b84 not found: ID does not exist" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.945740 4796 scope.go:117] "RemoveContainer" containerID="f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138" Dec 05 10:42:32 crc kubenswrapper[4796]: I1205 10:42:32.946057 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138"} err="failed to get container status \"f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138\": rpc error: code = NotFound desc = could not find container \"f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138\": container with ID starting with f74973734fdb48fe1a4e82877db917095a0447874591e92e07aab36946d44138 not found: ID does not exist" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.005541 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/914109be-f63c-4c84-87c3-a42943de51b0-run-httpd\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.005643 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbkzq\" (UniqueName: \"kubernetes.io/projected/914109be-f63c-4c84-87c3-a42943de51b0-kube-api-access-sbkzq\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.005727 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/914109be-f63c-4c84-87c3-a42943de51b0-log-httpd\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.005819 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.005849 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.005907 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-scripts\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.005981 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-config-data\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.106660 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.107181 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.107229 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-scripts\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.107289 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-config-data\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.107330 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/914109be-f63c-4c84-87c3-a42943de51b0-run-httpd\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.107392 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbkzq\" (UniqueName: \"kubernetes.io/projected/914109be-f63c-4c84-87c3-a42943de51b0-kube-api-access-sbkzq\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.107417 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/914109be-f63c-4c84-87c3-a42943de51b0-log-httpd\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.107814 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/914109be-f63c-4c84-87c3-a42943de51b0-log-httpd\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.107967 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/914109be-f63c-4c84-87c3-a42943de51b0-run-httpd\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.113181 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.113242 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-scripts\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.113446 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-config-data\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.114214 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.123296 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbkzq\" (UniqueName: \"kubernetes.io/projected/914109be-f63c-4c84-87c3-a42943de51b0-kube-api-access-sbkzq\") pod \"ceilometer-0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.224613 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.648823 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.820891 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c53203c0-53e8-4b2b-90d3-a9833bd9e7f2","Type":"ContainerStarted","Data":"148db872bc43a98cc9b47bab009f434ce88aef128065786b9fb219d79fbeebe0"} Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.824298 4796 generic.go:334] "Generic (PLEG): container finished" podID="0c5562c0-e374-49b7-93da-78040b742805" containerID="4a3f3b07f141019e83de6953dc7badc4c42fb749307ea42afd566cdeb6b8110c" exitCode=0 Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.824348 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f5c67c464-8hmbc" event={"ID":"0c5562c0-e374-49b7-93da-78040b742805","Type":"ContainerDied","Data":"4a3f3b07f141019e83de6953dc7badc4c42fb749307ea42afd566cdeb6b8110c"} Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.825242 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"914109be-f63c-4c84-87c3-a42943de51b0","Type":"ContainerStarted","Data":"9482e8f4b8443fb8f63ccd1a0889bdd36956fc3356bb0f467d2cc652c19b68a1"} Dec 05 10:42:33 crc kubenswrapper[4796]: I1205 10:42:33.975649 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.057558 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="534a78f4-35f6-4cd7-ab60-7919b5bbc2f8" path="/var/lib/kubelet/pods/534a78f4-35f6-4cd7-ab60-7919b5bbc2f8/volumes" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.123206 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-ovndb-tls-certs\") pod \"0c5562c0-e374-49b7-93da-78040b742805\" (UID: \"0c5562c0-e374-49b7-93da-78040b742805\") " Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.123588 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4c4n\" (UniqueName: \"kubernetes.io/projected/0c5562c0-e374-49b7-93da-78040b742805-kube-api-access-h4c4n\") pod \"0c5562c0-e374-49b7-93da-78040b742805\" (UID: \"0c5562c0-e374-49b7-93da-78040b742805\") " Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.123619 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-httpd-config\") pod \"0c5562c0-e374-49b7-93da-78040b742805\" (UID: \"0c5562c0-e374-49b7-93da-78040b742805\") " Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.123640 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-combined-ca-bundle\") pod \"0c5562c0-e374-49b7-93da-78040b742805\" (UID: 
\"0c5562c0-e374-49b7-93da-78040b742805\") " Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.123773 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-config\") pod \"0c5562c0-e374-49b7-93da-78040b742805\" (UID: \"0c5562c0-e374-49b7-93da-78040b742805\") " Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.132968 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0c5562c0-e374-49b7-93da-78040b742805" (UID: "0c5562c0-e374-49b7-93da-78040b742805"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.135139 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c5562c0-e374-49b7-93da-78040b742805-kube-api-access-h4c4n" (OuterVolumeSpecName: "kube-api-access-h4c4n") pod "0c5562c0-e374-49b7-93da-78040b742805" (UID: "0c5562c0-e374-49b7-93da-78040b742805"). InnerVolumeSpecName "kube-api-access-h4c4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.192767 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c5562c0-e374-49b7-93da-78040b742805" (UID: "0c5562c0-e374-49b7-93da-78040b742805"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.212478 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-config" (OuterVolumeSpecName: "config") pod "0c5562c0-e374-49b7-93da-78040b742805" (UID: "0c5562c0-e374-49b7-93da-78040b742805"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.217699 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0c5562c0-e374-49b7-93da-78040b742805" (UID: "0c5562c0-e374-49b7-93da-78040b742805"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.226076 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4c4n\" (UniqueName: \"kubernetes.io/projected/0c5562c0-e374-49b7-93da-78040b742805-kube-api-access-h4c4n\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.226102 4796 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.226114 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.226121 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:34 crc 
kubenswrapper[4796]: I1205 10:42:34.226130 4796 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c5562c0-e374-49b7-93da-78040b742805-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:34 crc kubenswrapper[4796]: E1205 10:42:34.312801 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdca92fe_39ad_41e9_978b_1757290eee03.slice/crio-015efb0f191edca8f32d554f7c90a5e7333f1ac4f7dc4f0419abd600758422d5.scope\": RecentStats: unable to find data in memory cache]" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.433527 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.438655 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.555447 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fdca92fe-39ad-41e9-978b-1757290eee03-scripts\") pod \"fdca92fe-39ad-41e9-978b-1757290eee03\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.555868 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdca92fe-39ad-41e9-978b-1757290eee03-logs\") pod \"fdca92fe-39ad-41e9-978b-1757290eee03\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.555950 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fdca92fe-39ad-41e9-978b-1757290eee03-config-data\") pod \"fdca92fe-39ad-41e9-978b-1757290eee03\" (UID: 
\"fdca92fe-39ad-41e9-978b-1757290eee03\") " Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.556004 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6flc9\" (UniqueName: \"kubernetes.io/projected/fdca92fe-39ad-41e9-978b-1757290eee03-kube-api-access-6flc9\") pod \"fdca92fe-39ad-41e9-978b-1757290eee03\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.556180 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdca92fe-39ad-41e9-978b-1757290eee03-horizon-tls-certs\") pod \"fdca92fe-39ad-41e9-978b-1757290eee03\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.556236 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fdca92fe-39ad-41e9-978b-1757290eee03-horizon-secret-key\") pod \"fdca92fe-39ad-41e9-978b-1757290eee03\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.556385 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdca92fe-39ad-41e9-978b-1757290eee03-combined-ca-bundle\") pod \"fdca92fe-39ad-41e9-978b-1757290eee03\" (UID: \"fdca92fe-39ad-41e9-978b-1757290eee03\") " Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.556433 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdca92fe-39ad-41e9-978b-1757290eee03-logs" (OuterVolumeSpecName: "logs") pod "fdca92fe-39ad-41e9-978b-1757290eee03" (UID: "fdca92fe-39ad-41e9-978b-1757290eee03"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.556880 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdca92fe-39ad-41e9-978b-1757290eee03-logs\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.564399 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdca92fe-39ad-41e9-978b-1757290eee03-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "fdca92fe-39ad-41e9-978b-1757290eee03" (UID: "fdca92fe-39ad-41e9-978b-1757290eee03"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.564490 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdca92fe-39ad-41e9-978b-1757290eee03-kube-api-access-6flc9" (OuterVolumeSpecName: "kube-api-access-6flc9") pod "fdca92fe-39ad-41e9-978b-1757290eee03" (UID: "fdca92fe-39ad-41e9-978b-1757290eee03"). InnerVolumeSpecName "kube-api-access-6flc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.581272 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdca92fe-39ad-41e9-978b-1757290eee03-scripts" (OuterVolumeSpecName: "scripts") pod "fdca92fe-39ad-41e9-978b-1757290eee03" (UID: "fdca92fe-39ad-41e9-978b-1757290eee03"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.581425 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdca92fe-39ad-41e9-978b-1757290eee03-config-data" (OuterVolumeSpecName: "config-data") pod "fdca92fe-39ad-41e9-978b-1757290eee03" (UID: "fdca92fe-39ad-41e9-978b-1757290eee03"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.584169 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdca92fe-39ad-41e9-978b-1757290eee03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdca92fe-39ad-41e9-978b-1757290eee03" (UID: "fdca92fe-39ad-41e9-978b-1757290eee03"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.605887 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdca92fe-39ad-41e9-978b-1757290eee03-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "fdca92fe-39ad-41e9-978b-1757290eee03" (UID: "fdca92fe-39ad-41e9-978b-1757290eee03"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.660236 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fdca92fe-39ad-41e9-978b-1757290eee03-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.660274 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fdca92fe-39ad-41e9-978b-1757290eee03-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.660287 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6flc9\" (UniqueName: \"kubernetes.io/projected/fdca92fe-39ad-41e9-978b-1757290eee03-kube-api-access-6flc9\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.660301 4796 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fdca92fe-39ad-41e9-978b-1757290eee03-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.660312 4796 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fdca92fe-39ad-41e9-978b-1757290eee03-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.660321 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdca92fe-39ad-41e9-978b-1757290eee03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.835154 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"914109be-f63c-4c84-87c3-a42943de51b0","Type":"ContainerStarted","Data":"84c8f63735962e7d5f71a788cc589b6c65a2cfe28b4f6d6922e25b881d953183"} Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.837163 4796 generic.go:334] "Generic (PLEG): container finished" podID="fdca92fe-39ad-41e9-978b-1757290eee03" containerID="015efb0f191edca8f32d554f7c90a5e7333f1ac4f7dc4f0419abd600758422d5" exitCode=137 Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.837232 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69fdddc9b6-2ckhp" event={"ID":"fdca92fe-39ad-41e9-978b-1757290eee03","Type":"ContainerDied","Data":"015efb0f191edca8f32d554f7c90a5e7333f1ac4f7dc4f0419abd600758422d5"} Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.837252 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69fdddc9b6-2ckhp" event={"ID":"fdca92fe-39ad-41e9-978b-1757290eee03","Type":"ContainerDied","Data":"fd3e0acef55e43bba925cd76104dd9c7cd665130af9566e803f5898ef89e4f5c"} Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.837283 4796 scope.go:117] "RemoveContainer" 
containerID="2c7054840b5c4d67e55e14bcd05d7db816598f2a9fee7b6e51d65a69305a3d62" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.837357 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69fdddc9b6-2ckhp" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.840620 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f5c67c464-8hmbc" Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.840662 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f5c67c464-8hmbc" event={"ID":"0c5562c0-e374-49b7-93da-78040b742805","Type":"ContainerDied","Data":"6f2777da5e1b5f2633b97e6981c54c380e68b1866d5cb8142bd8bc7336e98f19"} Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.882907 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f5c67c464-8hmbc"] Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.892404 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6f5c67c464-8hmbc"] Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.896787 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69fdddc9b6-2ckhp"] Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.901609 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-69fdddc9b6-2ckhp"] Dec 05 10:42:34 crc kubenswrapper[4796]: I1205 10:42:34.997364 4796 scope.go:117] "RemoveContainer" containerID="015efb0f191edca8f32d554f7c90a5e7333f1ac4f7dc4f0419abd600758422d5" Dec 05 10:42:35 crc kubenswrapper[4796]: I1205 10:42:35.026386 4796 scope.go:117] "RemoveContainer" containerID="2c7054840b5c4d67e55e14bcd05d7db816598f2a9fee7b6e51d65a69305a3d62" Dec 05 10:42:35 crc kubenswrapper[4796]: E1205 10:42:35.026916 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2c7054840b5c4d67e55e14bcd05d7db816598f2a9fee7b6e51d65a69305a3d62\": container with ID starting with 2c7054840b5c4d67e55e14bcd05d7db816598f2a9fee7b6e51d65a69305a3d62 not found: ID does not exist" containerID="2c7054840b5c4d67e55e14bcd05d7db816598f2a9fee7b6e51d65a69305a3d62" Dec 05 10:42:35 crc kubenswrapper[4796]: I1205 10:42:35.026949 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c7054840b5c4d67e55e14bcd05d7db816598f2a9fee7b6e51d65a69305a3d62"} err="failed to get container status \"2c7054840b5c4d67e55e14bcd05d7db816598f2a9fee7b6e51d65a69305a3d62\": rpc error: code = NotFound desc = could not find container \"2c7054840b5c4d67e55e14bcd05d7db816598f2a9fee7b6e51d65a69305a3d62\": container with ID starting with 2c7054840b5c4d67e55e14bcd05d7db816598f2a9fee7b6e51d65a69305a3d62 not found: ID does not exist" Dec 05 10:42:35 crc kubenswrapper[4796]: I1205 10:42:35.026972 4796 scope.go:117] "RemoveContainer" containerID="015efb0f191edca8f32d554f7c90a5e7333f1ac4f7dc4f0419abd600758422d5" Dec 05 10:42:35 crc kubenswrapper[4796]: E1205 10:42:35.027318 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"015efb0f191edca8f32d554f7c90a5e7333f1ac4f7dc4f0419abd600758422d5\": container with ID starting with 015efb0f191edca8f32d554f7c90a5e7333f1ac4f7dc4f0419abd600758422d5 not found: ID does not exist" containerID="015efb0f191edca8f32d554f7c90a5e7333f1ac4f7dc4f0419abd600758422d5" Dec 05 10:42:35 crc kubenswrapper[4796]: I1205 10:42:35.027340 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"015efb0f191edca8f32d554f7c90a5e7333f1ac4f7dc4f0419abd600758422d5"} err="failed to get container status \"015efb0f191edca8f32d554f7c90a5e7333f1ac4f7dc4f0419abd600758422d5\": rpc error: code = NotFound desc = could not find container \"015efb0f191edca8f32d554f7c90a5e7333f1ac4f7dc4f0419abd600758422d5\": container with ID 
starting with 015efb0f191edca8f32d554f7c90a5e7333f1ac4f7dc4f0419abd600758422d5 not found: ID does not exist" Dec 05 10:42:35 crc kubenswrapper[4796]: I1205 10:42:35.027381 4796 scope.go:117] "RemoveContainer" containerID="87fcdcfc32559969336f51c57e8712b9a25d66666df0a9b8dc53381299bb8b05" Dec 05 10:42:35 crc kubenswrapper[4796]: I1205 10:42:35.050835 4796 scope.go:117] "RemoveContainer" containerID="4a3f3b07f141019e83de6953dc7badc4c42fb749307ea42afd566cdeb6b8110c" Dec 05 10:42:35 crc kubenswrapper[4796]: I1205 10:42:35.177600 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:42:35 crc kubenswrapper[4796]: I1205 10:42:35.177664 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:42:35 crc kubenswrapper[4796]: I1205 10:42:35.848465 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"914109be-f63c-4c84-87c3-a42943de51b0","Type":"ContainerStarted","Data":"9f55ccd4203bcdcb909cc60ab7ebc76dd62333824c5ec005f433aec87edb01c1"} Dec 05 10:42:36 crc kubenswrapper[4796]: I1205 10:42:36.039219 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c5562c0-e374-49b7-93da-78040b742805" path="/var/lib/kubelet/pods/0c5562c0-e374-49b7-93da-78040b742805/volumes" Dec 05 10:42:36 crc kubenswrapper[4796]: I1205 10:42:36.039958 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdca92fe-39ad-41e9-978b-1757290eee03" 
path="/var/lib/kubelet/pods/fdca92fe-39ad-41e9-978b-1757290eee03/volumes" Dec 05 10:42:36 crc kubenswrapper[4796]: I1205 10:42:36.163109 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-799657985-knzrm" Dec 05 10:42:36 crc kubenswrapper[4796]: I1205 10:42:36.177886 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 05 10:42:36 crc kubenswrapper[4796]: I1205 10:42:36.860041 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"914109be-f63c-4c84-87c3-a42943de51b0","Type":"ContainerStarted","Data":"6d19c94ae18b5c90e20d92e8e68e3242084d519b6e48a73d8b24134c890c791f"} Dec 05 10:42:37 crc kubenswrapper[4796]: I1205 10:42:37.429359 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 05 10:42:38 crc kubenswrapper[4796]: I1205 10:42:38.282070 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 05 10:42:38 crc kubenswrapper[4796]: I1205 10:42:38.282150 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 05 10:42:38 crc kubenswrapper[4796]: I1205 10:42:38.307565 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 10:42:38 crc kubenswrapper[4796]: I1205 10:42:38.315080 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 05 10:42:38 crc kubenswrapper[4796]: I1205 10:42:38.874310 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 10:42:38 crc kubenswrapper[4796]: I1205 10:42:38.874617 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 05 
10:42:39 crc kubenswrapper[4796]: I1205 10:42:39.148798 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 10:42:39 crc kubenswrapper[4796]: I1205 10:42:39.148925 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 05 10:42:39 crc kubenswrapper[4796]: I1205 10:42:39.178543 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 10:42:39 crc kubenswrapper[4796]: I1205 10:42:39.188748 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 05 10:42:39 crc kubenswrapper[4796]: I1205 10:42:39.880838 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 10:42:39 crc kubenswrapper[4796]: I1205 10:42:39.881282 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 05 10:42:40 crc kubenswrapper[4796]: I1205 10:42:40.397136 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 10:42:40 crc kubenswrapper[4796]: I1205 10:42:40.403592 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 05 10:42:41 crc kubenswrapper[4796]: I1205 10:42:41.399622 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 05 10:42:41 crc kubenswrapper[4796]: I1205 10:42:41.577761 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 10:42:41 crc kubenswrapper[4796]: I1205 10:42:41.580069 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 05 
10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.382084 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-f7bc-account-create-mgfrs"] Dec 05 10:42:43 crc kubenswrapper[4796]: E1205 10:42:43.382905 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdca92fe-39ad-41e9-978b-1757290eee03" containerName="horizon" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.382919 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdca92fe-39ad-41e9-978b-1757290eee03" containerName="horizon" Dec 05 10:42:43 crc kubenswrapper[4796]: E1205 10:42:43.382931 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdca92fe-39ad-41e9-978b-1757290eee03" containerName="horizon-log" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.382938 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdca92fe-39ad-41e9-978b-1757290eee03" containerName="horizon-log" Dec 05 10:42:43 crc kubenswrapper[4796]: E1205 10:42:43.382946 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c5562c0-e374-49b7-93da-78040b742805" containerName="neutron-api" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.382952 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5562c0-e374-49b7-93da-78040b742805" containerName="neutron-api" Dec 05 10:42:43 crc kubenswrapper[4796]: E1205 10:42:43.382970 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c5562c0-e374-49b7-93da-78040b742805" containerName="neutron-httpd" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.382976 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5562c0-e374-49b7-93da-78040b742805" containerName="neutron-httpd" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.383155 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdca92fe-39ad-41e9-978b-1757290eee03" containerName="horizon" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.383172 4796 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="fdca92fe-39ad-41e9-978b-1757290eee03" containerName="horizon-log" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.383182 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c5562c0-e374-49b7-93da-78040b742805" containerName="neutron-httpd" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.383200 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c5562c0-e374-49b7-93da-78040b742805" containerName="neutron-api" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.383760 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f7bc-account-create-mgfrs" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.385256 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.391036 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f7bc-account-create-mgfrs"] Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.470530 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g25s8\" (UniqueName: \"kubernetes.io/projected/7c6959f5-c89c-47a0-ac74-03e207adc303-kube-api-access-g25s8\") pod \"nova-api-f7bc-account-create-mgfrs\" (UID: \"7c6959f5-c89c-47a0-ac74-03e207adc303\") " pod="openstack/nova-api-f7bc-account-create-mgfrs" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.572084 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g25s8\" (UniqueName: \"kubernetes.io/projected/7c6959f5-c89c-47a0-ac74-03e207adc303-kube-api-access-g25s8\") pod \"nova-api-f7bc-account-create-mgfrs\" (UID: \"7c6959f5-c89c-47a0-ac74-03e207adc303\") " pod="openstack/nova-api-f7bc-account-create-mgfrs" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.575179 4796 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-564f-account-create-pxxp8"] Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.576261 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-564f-account-create-pxxp8" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.589926 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-564f-account-create-pxxp8"] Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.591820 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.595628 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g25s8\" (UniqueName: \"kubernetes.io/projected/7c6959f5-c89c-47a0-ac74-03e207adc303-kube-api-access-g25s8\") pod \"nova-api-f7bc-account-create-mgfrs\" (UID: \"7c6959f5-c89c-47a0-ac74-03e207adc303\") " pod="openstack/nova-api-f7bc-account-create-mgfrs" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.674285 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkjdr\" (UniqueName: \"kubernetes.io/projected/d42b0029-cd05-4ffc-99cc-b6230c464e58-kube-api-access-qkjdr\") pod \"nova-cell0-564f-account-create-pxxp8\" (UID: \"d42b0029-cd05-4ffc-99cc-b6230c464e58\") " pod="openstack/nova-cell0-564f-account-create-pxxp8" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.700503 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-f7bc-account-create-mgfrs" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.777224 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkjdr\" (UniqueName: \"kubernetes.io/projected/d42b0029-cd05-4ffc-99cc-b6230c464e58-kube-api-access-qkjdr\") pod \"nova-cell0-564f-account-create-pxxp8\" (UID: \"d42b0029-cd05-4ffc-99cc-b6230c464e58\") " pod="openstack/nova-cell0-564f-account-create-pxxp8" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.782651 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-4868-account-create-l5rm9"] Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.784177 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4868-account-create-l5rm9" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.790142 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.797514 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkjdr\" (UniqueName: \"kubernetes.io/projected/d42b0029-cd05-4ffc-99cc-b6230c464e58-kube-api-access-qkjdr\") pod \"nova-cell0-564f-account-create-pxxp8\" (UID: \"d42b0029-cd05-4ffc-99cc-b6230c464e58\") " pod="openstack/nova-cell0-564f-account-create-pxxp8" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.805960 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4868-account-create-l5rm9"] Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.881103 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4frp\" (UniqueName: \"kubernetes.io/projected/7cf65f7c-cddb-463c-9be0-939961c5e902-kube-api-access-d4frp\") pod \"nova-cell1-4868-account-create-l5rm9\" (UID: \"7cf65f7c-cddb-463c-9be0-939961c5e902\") " 
pod="openstack/nova-cell1-4868-account-create-l5rm9" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.892148 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-564f-account-create-pxxp8" Dec 05 10:42:43 crc kubenswrapper[4796]: I1205 10:42:43.984326 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4frp\" (UniqueName: \"kubernetes.io/projected/7cf65f7c-cddb-463c-9be0-939961c5e902-kube-api-access-d4frp\") pod \"nova-cell1-4868-account-create-l5rm9\" (UID: \"7cf65f7c-cddb-463c-9be0-939961c5e902\") " pod="openstack/nova-cell1-4868-account-create-l5rm9" Dec 05 10:42:44 crc kubenswrapper[4796]: I1205 10:42:44.007148 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4frp\" (UniqueName: \"kubernetes.io/projected/7cf65f7c-cddb-463c-9be0-939961c5e902-kube-api-access-d4frp\") pod \"nova-cell1-4868-account-create-l5rm9\" (UID: \"7cf65f7c-cddb-463c-9be0-939961c5e902\") " pod="openstack/nova-cell1-4868-account-create-l5rm9" Dec 05 10:42:44 crc kubenswrapper[4796]: I1205 10:42:44.127015 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-4868-account-create-l5rm9" Dec 05 10:42:44 crc kubenswrapper[4796]: I1205 10:42:44.205456 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f7bc-account-create-mgfrs"] Dec 05 10:42:44 crc kubenswrapper[4796]: I1205 10:42:44.338309 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-564f-account-create-pxxp8"] Dec 05 10:42:44 crc kubenswrapper[4796]: W1205 10:42:44.344560 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd42b0029_cd05_4ffc_99cc_b6230c464e58.slice/crio-32b288c2ae9512fd1b591df6c019f0c6e1cd4d7b0aef3ba0a6273f4cb510ec75 WatchSource:0}: Error finding container 32b288c2ae9512fd1b591df6c019f0c6e1cd4d7b0aef3ba0a6273f4cb510ec75: Status 404 returned error can't find the container with id 32b288c2ae9512fd1b591df6c019f0c6e1cd4d7b0aef3ba0a6273f4cb510ec75 Dec 05 10:42:44 crc kubenswrapper[4796]: I1205 10:42:44.507818 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4868-account-create-l5rm9"] Dec 05 10:42:44 crc kubenswrapper[4796]: W1205 10:42:44.519090 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cf65f7c_cddb_463c_9be0_939961c5e902.slice/crio-9563a65121535264708ecd2a3fe843bccb7c2c0851e7abd53ef95ca7230f2c57 WatchSource:0}: Error finding container 9563a65121535264708ecd2a3fe843bccb7c2c0851e7abd53ef95ca7230f2c57: Status 404 returned error can't find the container with id 9563a65121535264708ecd2a3fe843bccb7c2c0851e7abd53ef95ca7230f2c57 Dec 05 10:42:44 crc kubenswrapper[4796]: E1205 10:42:44.593773 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c6959f5_c89c_47a0_ac74_03e207adc303.slice/crio-conmon-dce665f6bbe16b4036f2c0f4c1773649d9f17af91474ea178f951a42c0d8a1bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c6959f5_c89c_47a0_ac74_03e207adc303.slice/crio-dce665f6bbe16b4036f2c0f4c1773649d9f17af91474ea178f951a42c0d8a1bb.scope\": RecentStats: unable to find data in memory cache]" Dec 05 10:42:44 crc kubenswrapper[4796]: I1205 10:42:44.924455 4796 generic.go:334] "Generic (PLEG): container finished" podID="7c6959f5-c89c-47a0-ac74-03e207adc303" containerID="dce665f6bbe16b4036f2c0f4c1773649d9f17af91474ea178f951a42c0d8a1bb" exitCode=0 Dec 05 10:42:44 crc kubenswrapper[4796]: I1205 10:42:44.924549 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f7bc-account-create-mgfrs" event={"ID":"7c6959f5-c89c-47a0-ac74-03e207adc303","Type":"ContainerDied","Data":"dce665f6bbe16b4036f2c0f4c1773649d9f17af91474ea178f951a42c0d8a1bb"} Dec 05 10:42:44 crc kubenswrapper[4796]: I1205 10:42:44.924777 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f7bc-account-create-mgfrs" event={"ID":"7c6959f5-c89c-47a0-ac74-03e207adc303","Type":"ContainerStarted","Data":"38a9a27b8cfc078bd968d2dced5c67bd925c81aac2fad5ac1647c3a802881e31"} Dec 05 10:42:44 crc kubenswrapper[4796]: I1205 10:42:44.926131 4796 generic.go:334] "Generic (PLEG): container finished" podID="d42b0029-cd05-4ffc-99cc-b6230c464e58" containerID="9eb9b47e90f2a6bb10fdfcda2dd134226808b10dab1880334da62cca21e83c9b" exitCode=0 Dec 05 10:42:44 crc kubenswrapper[4796]: I1205 10:42:44.926166 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-564f-account-create-pxxp8" event={"ID":"d42b0029-cd05-4ffc-99cc-b6230c464e58","Type":"ContainerDied","Data":"9eb9b47e90f2a6bb10fdfcda2dd134226808b10dab1880334da62cca21e83c9b"} Dec 05 10:42:44 crc 
kubenswrapper[4796]: I1205 10:42:44.926203 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-564f-account-create-pxxp8" event={"ID":"d42b0029-cd05-4ffc-99cc-b6230c464e58","Type":"ContainerStarted","Data":"32b288c2ae9512fd1b591df6c019f0c6e1cd4d7b0aef3ba0a6273f4cb510ec75"} Dec 05 10:42:44 crc kubenswrapper[4796]: I1205 10:42:44.927663 4796 generic.go:334] "Generic (PLEG): container finished" podID="7cf65f7c-cddb-463c-9be0-939961c5e902" containerID="2e560f5cb836fd7323df6a132ee6cf2300f605427c9f76d527575ec557f06325" exitCode=0 Dec 05 10:42:44 crc kubenswrapper[4796]: I1205 10:42:44.927707 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4868-account-create-l5rm9" event={"ID":"7cf65f7c-cddb-463c-9be0-939961c5e902","Type":"ContainerDied","Data":"2e560f5cb836fd7323df6a132ee6cf2300f605427c9f76d527575ec557f06325"} Dec 05 10:42:44 crc kubenswrapper[4796]: I1205 10:42:44.927749 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4868-account-create-l5rm9" event={"ID":"7cf65f7c-cddb-463c-9be0-939961c5e902","Type":"ContainerStarted","Data":"9563a65121535264708ecd2a3fe843bccb7c2c0851e7abd53ef95ca7230f2c57"} Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.215122 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-f7bc-account-create-mgfrs" Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.330809 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g25s8\" (UniqueName: \"kubernetes.io/projected/7c6959f5-c89c-47a0-ac74-03e207adc303-kube-api-access-g25s8\") pod \"7c6959f5-c89c-47a0-ac74-03e207adc303\" (UID: \"7c6959f5-c89c-47a0-ac74-03e207adc303\") " Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.336544 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c6959f5-c89c-47a0-ac74-03e207adc303-kube-api-access-g25s8" (OuterVolumeSpecName: "kube-api-access-g25s8") pod "7c6959f5-c89c-47a0-ac74-03e207adc303" (UID: "7c6959f5-c89c-47a0-ac74-03e207adc303"). InnerVolumeSpecName "kube-api-access-g25s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.339591 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4868-account-create-l5rm9" Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.379270 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-564f-account-create-pxxp8" Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.433237 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4frp\" (UniqueName: \"kubernetes.io/projected/7cf65f7c-cddb-463c-9be0-939961c5e902-kube-api-access-d4frp\") pod \"7cf65f7c-cddb-463c-9be0-939961c5e902\" (UID: \"7cf65f7c-cddb-463c-9be0-939961c5e902\") " Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.433359 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkjdr\" (UniqueName: \"kubernetes.io/projected/d42b0029-cd05-4ffc-99cc-b6230c464e58-kube-api-access-qkjdr\") pod \"d42b0029-cd05-4ffc-99cc-b6230c464e58\" (UID: \"d42b0029-cd05-4ffc-99cc-b6230c464e58\") " Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.434013 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g25s8\" (UniqueName: \"kubernetes.io/projected/7c6959f5-c89c-47a0-ac74-03e207adc303-kube-api-access-g25s8\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.436892 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d42b0029-cd05-4ffc-99cc-b6230c464e58-kube-api-access-qkjdr" (OuterVolumeSpecName: "kube-api-access-qkjdr") pod "d42b0029-cd05-4ffc-99cc-b6230c464e58" (UID: "d42b0029-cd05-4ffc-99cc-b6230c464e58"). InnerVolumeSpecName "kube-api-access-qkjdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.437068 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cf65f7c-cddb-463c-9be0-939961c5e902-kube-api-access-d4frp" (OuterVolumeSpecName: "kube-api-access-d4frp") pod "7cf65f7c-cddb-463c-9be0-939961c5e902" (UID: "7cf65f7c-cddb-463c-9be0-939961c5e902"). InnerVolumeSpecName "kube-api-access-d4frp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.536369 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4frp\" (UniqueName: \"kubernetes.io/projected/7cf65f7c-cddb-463c-9be0-939961c5e902-kube-api-access-d4frp\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.536411 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkjdr\" (UniqueName: \"kubernetes.io/projected/d42b0029-cd05-4ffc-99cc-b6230c464e58-kube-api-access-qkjdr\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.944461 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4868-account-create-l5rm9" event={"ID":"7cf65f7c-cddb-463c-9be0-939961c5e902","Type":"ContainerDied","Data":"9563a65121535264708ecd2a3fe843bccb7c2c0851e7abd53ef95ca7230f2c57"} Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.944793 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9563a65121535264708ecd2a3fe843bccb7c2c0851e7abd53ef95ca7230f2c57" Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.944736 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-4868-account-create-l5rm9" Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.946137 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f7bc-account-create-mgfrs" event={"ID":"7c6959f5-c89c-47a0-ac74-03e207adc303","Type":"ContainerDied","Data":"38a9a27b8cfc078bd968d2dced5c67bd925c81aac2fad5ac1647c3a802881e31"} Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.946176 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38a9a27b8cfc078bd968d2dced5c67bd925c81aac2fad5ac1647c3a802881e31" Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.946215 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f7bc-account-create-mgfrs" Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.955354 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-564f-account-create-pxxp8" event={"ID":"d42b0029-cd05-4ffc-99cc-b6230c464e58","Type":"ContainerDied","Data":"32b288c2ae9512fd1b591df6c019f0c6e1cd4d7b0aef3ba0a6273f4cb510ec75"} Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.955382 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32b288c2ae9512fd1b591df6c019f0c6e1cd4d7b0aef3ba0a6273f4cb510ec75" Dec 05 10:42:46 crc kubenswrapper[4796]: I1205 10:42:46.955427 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-564f-account-create-pxxp8" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.759517 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptbpr"] Dec 05 10:42:48 crc kubenswrapper[4796]: E1205 10:42:48.761210 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c6959f5-c89c-47a0-ac74-03e207adc303" containerName="mariadb-account-create" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.761307 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c6959f5-c89c-47a0-ac74-03e207adc303" containerName="mariadb-account-create" Dec 05 10:42:48 crc kubenswrapper[4796]: E1205 10:42:48.761410 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cf65f7c-cddb-463c-9be0-939961c5e902" containerName="mariadb-account-create" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.762423 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cf65f7c-cddb-463c-9be0-939961c5e902" containerName="mariadb-account-create" Dec 05 10:42:48 crc kubenswrapper[4796]: E1205 10:42:48.762513 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d42b0029-cd05-4ffc-99cc-b6230c464e58" containerName="mariadb-account-create" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.762578 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d42b0029-cd05-4ffc-99cc-b6230c464e58" containerName="mariadb-account-create" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.763288 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c6959f5-c89c-47a0-ac74-03e207adc303" containerName="mariadb-account-create" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.763371 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cf65f7c-cddb-463c-9be0-939961c5e902" containerName="mariadb-account-create" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.763437 4796 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d42b0029-cd05-4ffc-99cc-b6230c464e58" containerName="mariadb-account-create" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.764263 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ptbpr" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.766402 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.766618 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tqvhl" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.766944 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptbpr"] Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.778380 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.879746 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h79ls\" (UniqueName: \"kubernetes.io/projected/9ae7419c-e111-4405-8ed0-90f518e557d8-kube-api-access-h79ls\") pod \"nova-cell0-conductor-db-sync-ptbpr\" (UID: \"9ae7419c-e111-4405-8ed0-90f518e557d8\") " pod="openstack/nova-cell0-conductor-db-sync-ptbpr" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.880131 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ae7419c-e111-4405-8ed0-90f518e557d8-scripts\") pod \"nova-cell0-conductor-db-sync-ptbpr\" (UID: \"9ae7419c-e111-4405-8ed0-90f518e557d8\") " pod="openstack/nova-cell0-conductor-db-sync-ptbpr" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.880237 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9ae7419c-e111-4405-8ed0-90f518e557d8-config-data\") pod \"nova-cell0-conductor-db-sync-ptbpr\" (UID: \"9ae7419c-e111-4405-8ed0-90f518e557d8\") " pod="openstack/nova-cell0-conductor-db-sync-ptbpr" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.880376 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae7419c-e111-4405-8ed0-90f518e557d8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ptbpr\" (UID: \"9ae7419c-e111-4405-8ed0-90f518e557d8\") " pod="openstack/nova-cell0-conductor-db-sync-ptbpr" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.982020 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h79ls\" (UniqueName: \"kubernetes.io/projected/9ae7419c-e111-4405-8ed0-90f518e557d8-kube-api-access-h79ls\") pod \"nova-cell0-conductor-db-sync-ptbpr\" (UID: \"9ae7419c-e111-4405-8ed0-90f518e557d8\") " pod="openstack/nova-cell0-conductor-db-sync-ptbpr" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.982142 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ae7419c-e111-4405-8ed0-90f518e557d8-scripts\") pod \"nova-cell0-conductor-db-sync-ptbpr\" (UID: \"9ae7419c-e111-4405-8ed0-90f518e557d8\") " pod="openstack/nova-cell0-conductor-db-sync-ptbpr" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.982174 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae7419c-e111-4405-8ed0-90f518e557d8-config-data\") pod \"nova-cell0-conductor-db-sync-ptbpr\" (UID: \"9ae7419c-e111-4405-8ed0-90f518e557d8\") " pod="openstack/nova-cell0-conductor-db-sync-ptbpr" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.982237 4796 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae7419c-e111-4405-8ed0-90f518e557d8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ptbpr\" (UID: \"9ae7419c-e111-4405-8ed0-90f518e557d8\") " pod="openstack/nova-cell0-conductor-db-sync-ptbpr" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.988728 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae7419c-e111-4405-8ed0-90f518e557d8-config-data\") pod \"nova-cell0-conductor-db-sync-ptbpr\" (UID: \"9ae7419c-e111-4405-8ed0-90f518e557d8\") " pod="openstack/nova-cell0-conductor-db-sync-ptbpr" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.989064 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ae7419c-e111-4405-8ed0-90f518e557d8-scripts\") pod \"nova-cell0-conductor-db-sync-ptbpr\" (UID: \"9ae7419c-e111-4405-8ed0-90f518e557d8\") " pod="openstack/nova-cell0-conductor-db-sync-ptbpr" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.992661 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae7419c-e111-4405-8ed0-90f518e557d8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-ptbpr\" (UID: \"9ae7419c-e111-4405-8ed0-90f518e557d8\") " pod="openstack/nova-cell0-conductor-db-sync-ptbpr" Dec 05 10:42:48 crc kubenswrapper[4796]: I1205 10:42:48.995934 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h79ls\" (UniqueName: \"kubernetes.io/projected/9ae7419c-e111-4405-8ed0-90f518e557d8-kube-api-access-h79ls\") pod \"nova-cell0-conductor-db-sync-ptbpr\" (UID: \"9ae7419c-e111-4405-8ed0-90f518e557d8\") " pod="openstack/nova-cell0-conductor-db-sync-ptbpr" Dec 05 10:42:49 crc kubenswrapper[4796]: I1205 10:42:49.085464 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ptbpr" Dec 05 10:42:49 crc kubenswrapper[4796]: I1205 10:42:49.488540 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptbpr"] Dec 05 10:42:49 crc kubenswrapper[4796]: I1205 10:42:49.983532 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ptbpr" event={"ID":"9ae7419c-e111-4405-8ed0-90f518e557d8","Type":"ContainerStarted","Data":"f802ffc9c33cd2b69ab810d32dc0a1b3c6bdceb73c58c4ff907e9b4db43d4ef7"} Dec 05 10:42:56 crc kubenswrapper[4796]: I1205 10:42:56.047910 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ptbpr" event={"ID":"9ae7419c-e111-4405-8ed0-90f518e557d8","Type":"ContainerStarted","Data":"e479a14ed99eb8d50f49b02aa904852105d4ad6e766e5ed1d765881bf4f751e2"} Dec 05 10:42:56 crc kubenswrapper[4796]: I1205 10:42:56.077772 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-ptbpr" podStartSLOduration=2.5429530590000002 podStartE2EDuration="8.077753283s" podCreationTimestamp="2025-12-05 10:42:48 +0000 UTC" firstStartedPulling="2025-12-05 10:42:49.50675072 +0000 UTC m=+915.794856234" lastFinishedPulling="2025-12-05 10:42:55.041550945 +0000 UTC m=+921.329656458" observedRunningTime="2025-12-05 10:42:56.058506932 +0000 UTC m=+922.346612445" watchObservedRunningTime="2025-12-05 10:42:56.077753283 +0000 UTC m=+922.365858786" Dec 05 10:42:57 crc kubenswrapper[4796]: I1205 10:42:57.053415 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"914109be-f63c-4c84-87c3-a42943de51b0","Type":"ContainerStarted","Data":"ecbcad4670aac5554fb8d8c447b0964dde3804ac02a562514c520ab966740d31"} Dec 05 10:42:57 crc kubenswrapper[4796]: I1205 10:42:57.053573 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="914109be-f63c-4c84-87c3-a42943de51b0" containerName="ceilometer-central-agent" containerID="cri-o://84c8f63735962e7d5f71a788cc589b6c65a2cfe28b4f6d6922e25b881d953183" gracePeriod=30 Dec 05 10:42:57 crc kubenswrapper[4796]: I1205 10:42:57.053744 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="914109be-f63c-4c84-87c3-a42943de51b0" containerName="ceilometer-notification-agent" containerID="cri-o://9f55ccd4203bcdcb909cc60ab7ebc76dd62333824c5ec005f433aec87edb01c1" gracePeriod=30 Dec 05 10:42:57 crc kubenswrapper[4796]: I1205 10:42:57.053748 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="914109be-f63c-4c84-87c3-a42943de51b0" containerName="sg-core" containerID="cri-o://6d19c94ae18b5c90e20d92e8e68e3242084d519b6e48a73d8b24134c890c791f" gracePeriod=30 Dec 05 10:42:57 crc kubenswrapper[4796]: I1205 10:42:57.053796 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="914109be-f63c-4c84-87c3-a42943de51b0" containerName="proxy-httpd" containerID="cri-o://ecbcad4670aac5554fb8d8c447b0964dde3804ac02a562514c520ab966740d31" gracePeriod=30 Dec 05 10:42:57 crc kubenswrapper[4796]: I1205 10:42:57.073628 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7918295239999997 podStartE2EDuration="25.073615915s" podCreationTimestamp="2025-12-05 10:42:32 +0000 UTC" firstStartedPulling="2025-12-05 10:42:33.658154741 +0000 UTC m=+899.946260254" lastFinishedPulling="2025-12-05 10:42:55.939941131 +0000 UTC m=+922.228046645" observedRunningTime="2025-12-05 10:42:57.072543178 +0000 UTC m=+923.360648690" watchObservedRunningTime="2025-12-05 10:42:57.073615915 +0000 UTC m=+923.361721428" Dec 05 10:42:58 crc kubenswrapper[4796]: I1205 10:42:58.063102 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="914109be-f63c-4c84-87c3-a42943de51b0" containerID="ecbcad4670aac5554fb8d8c447b0964dde3804ac02a562514c520ab966740d31" exitCode=0 Dec 05 10:42:58 crc kubenswrapper[4796]: I1205 10:42:58.063359 4796 generic.go:334] "Generic (PLEG): container finished" podID="914109be-f63c-4c84-87c3-a42943de51b0" containerID="6d19c94ae18b5c90e20d92e8e68e3242084d519b6e48a73d8b24134c890c791f" exitCode=2 Dec 05 10:42:58 crc kubenswrapper[4796]: I1205 10:42:58.063367 4796 generic.go:334] "Generic (PLEG): container finished" podID="914109be-f63c-4c84-87c3-a42943de51b0" containerID="84c8f63735962e7d5f71a788cc589b6c65a2cfe28b4f6d6922e25b881d953183" exitCode=0 Dec 05 10:42:58 crc kubenswrapper[4796]: I1205 10:42:58.063131 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"914109be-f63c-4c84-87c3-a42943de51b0","Type":"ContainerDied","Data":"ecbcad4670aac5554fb8d8c447b0964dde3804ac02a562514c520ab966740d31"} Dec 05 10:42:58 crc kubenswrapper[4796]: I1205 10:42:58.063397 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"914109be-f63c-4c84-87c3-a42943de51b0","Type":"ContainerDied","Data":"6d19c94ae18b5c90e20d92e8e68e3242084d519b6e48a73d8b24134c890c791f"} Dec 05 10:42:58 crc kubenswrapper[4796]: I1205 10:42:58.063410 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"914109be-f63c-4c84-87c3-a42943de51b0","Type":"ContainerDied","Data":"84c8f63735962e7d5f71a788cc589b6c65a2cfe28b4f6d6922e25b881d953183"} Dec 05 10:42:59 crc kubenswrapper[4796]: I1205 10:42:59.851356 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:42:59 crc kubenswrapper[4796]: I1205 10:42:59.988695 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-combined-ca-bundle\") pod \"914109be-f63c-4c84-87c3-a42943de51b0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " Dec 05 10:42:59 crc kubenswrapper[4796]: I1205 10:42:59.988762 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-sg-core-conf-yaml\") pod \"914109be-f63c-4c84-87c3-a42943de51b0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " Dec 05 10:42:59 crc kubenswrapper[4796]: I1205 10:42:59.988819 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-scripts\") pod \"914109be-f63c-4c84-87c3-a42943de51b0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " Dec 05 10:42:59 crc kubenswrapper[4796]: I1205 10:42:59.988864 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/914109be-f63c-4c84-87c3-a42943de51b0-log-httpd\") pod \"914109be-f63c-4c84-87c3-a42943de51b0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " Dec 05 10:42:59 crc kubenswrapper[4796]: I1205 10:42:59.988938 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/914109be-f63c-4c84-87c3-a42943de51b0-run-httpd\") pod \"914109be-f63c-4c84-87c3-a42943de51b0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " Dec 05 10:42:59 crc kubenswrapper[4796]: I1205 10:42:59.989007 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-config-data\") pod \"914109be-f63c-4c84-87c3-a42943de51b0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " Dec 05 10:42:59 crc kubenswrapper[4796]: I1205 10:42:59.989027 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbkzq\" (UniqueName: \"kubernetes.io/projected/914109be-f63c-4c84-87c3-a42943de51b0-kube-api-access-sbkzq\") pod \"914109be-f63c-4c84-87c3-a42943de51b0\" (UID: \"914109be-f63c-4c84-87c3-a42943de51b0\") " Dec 05 10:42:59 crc kubenswrapper[4796]: I1205 10:42:59.989282 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/914109be-f63c-4c84-87c3-a42943de51b0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "914109be-f63c-4c84-87c3-a42943de51b0" (UID: "914109be-f63c-4c84-87c3-a42943de51b0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:42:59 crc kubenswrapper[4796]: I1205 10:42:59.989319 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/914109be-f63c-4c84-87c3-a42943de51b0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "914109be-f63c-4c84-87c3-a42943de51b0" (UID: "914109be-f63c-4c84-87c3-a42943de51b0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:42:59 crc kubenswrapper[4796]: I1205 10:42:59.989728 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/914109be-f63c-4c84-87c3-a42943de51b0-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:59 crc kubenswrapper[4796]: I1205 10:42:59.989745 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/914109be-f63c-4c84-87c3-a42943de51b0-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 10:42:59 crc kubenswrapper[4796]: I1205 10:42:59.993781 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/914109be-f63c-4c84-87c3-a42943de51b0-kube-api-access-sbkzq" (OuterVolumeSpecName: "kube-api-access-sbkzq") pod "914109be-f63c-4c84-87c3-a42943de51b0" (UID: "914109be-f63c-4c84-87c3-a42943de51b0"). InnerVolumeSpecName "kube-api-access-sbkzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:42:59 crc kubenswrapper[4796]: I1205 10:42:59.993903 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-scripts" (OuterVolumeSpecName: "scripts") pod "914109be-f63c-4c84-87c3-a42943de51b0" (UID: "914109be-f63c-4c84-87c3-a42943de51b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.011013 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "914109be-f63c-4c84-87c3-a42943de51b0" (UID: "914109be-f63c-4c84-87c3-a42943de51b0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.041200 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "914109be-f63c-4c84-87c3-a42943de51b0" (UID: "914109be-f63c-4c84-87c3-a42943de51b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.060867 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-config-data" (OuterVolumeSpecName: "config-data") pod "914109be-f63c-4c84-87c3-a42943de51b0" (UID: "914109be-f63c-4c84-87c3-a42943de51b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.093475 4796 generic.go:334] "Generic (PLEG): container finished" podID="914109be-f63c-4c84-87c3-a42943de51b0" containerID="9f55ccd4203bcdcb909cc60ab7ebc76dd62333824c5ec005f433aec87edb01c1" exitCode=0 Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.093506 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.093523 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"914109be-f63c-4c84-87c3-a42943de51b0","Type":"ContainerDied","Data":"9f55ccd4203bcdcb909cc60ab7ebc76dd62333824c5ec005f433aec87edb01c1"} Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.093551 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"914109be-f63c-4c84-87c3-a42943de51b0","Type":"ContainerDied","Data":"9482e8f4b8443fb8f63ccd1a0889bdd36956fc3356bb0f467d2cc652c19b68a1"} Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.093569 4796 scope.go:117] "RemoveContainer" containerID="ecbcad4670aac5554fb8d8c447b0964dde3804ac02a562514c520ab966740d31" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.093587 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.093529 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.093663 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.093674 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/914109be-f63c-4c84-87c3-a42943de51b0-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.093704 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbkzq\" (UniqueName: \"kubernetes.io/projected/914109be-f63c-4c84-87c3-a42943de51b0-kube-api-access-sbkzq\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.122019 4796 scope.go:117] "RemoveContainer" containerID="6d19c94ae18b5c90e20d92e8e68e3242084d519b6e48a73d8b24134c890c791f" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.122301 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.129712 4796 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/ceilometer-0"] Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.148503 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:43:00 crc kubenswrapper[4796]: E1205 10:43:00.148904 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914109be-f63c-4c84-87c3-a42943de51b0" containerName="sg-core" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.148916 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="914109be-f63c-4c84-87c3-a42943de51b0" containerName="sg-core" Dec 05 10:43:00 crc kubenswrapper[4796]: E1205 10:43:00.148929 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914109be-f63c-4c84-87c3-a42943de51b0" containerName="proxy-httpd" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.148934 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="914109be-f63c-4c84-87c3-a42943de51b0" containerName="proxy-httpd" Dec 05 10:43:00 crc kubenswrapper[4796]: E1205 10:43:00.148960 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914109be-f63c-4c84-87c3-a42943de51b0" containerName="ceilometer-central-agent" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.148966 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="914109be-f63c-4c84-87c3-a42943de51b0" containerName="ceilometer-central-agent" Dec 05 10:43:00 crc kubenswrapper[4796]: E1205 10:43:00.148977 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="914109be-f63c-4c84-87c3-a42943de51b0" containerName="ceilometer-notification-agent" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.148983 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="914109be-f63c-4c84-87c3-a42943de51b0" containerName="ceilometer-notification-agent" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.149081 4796 scope.go:117] "RemoveContainer" containerID="9f55ccd4203bcdcb909cc60ab7ebc76dd62333824c5ec005f433aec87edb01c1" Dec 05 10:43:00 crc 
kubenswrapper[4796]: I1205 10:43:00.149174 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="914109be-f63c-4c84-87c3-a42943de51b0" containerName="sg-core" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.149194 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="914109be-f63c-4c84-87c3-a42943de51b0" containerName="proxy-httpd" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.149649 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="914109be-f63c-4c84-87c3-a42943de51b0" containerName="ceilometer-central-agent" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.149664 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="914109be-f63c-4c84-87c3-a42943de51b0" containerName="ceilometer-notification-agent" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.151955 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.154111 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.154358 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.170915 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.181762 4796 scope.go:117] "RemoveContainer" containerID="84c8f63735962e7d5f71a788cc589b6c65a2cfe28b4f6d6922e25b881d953183" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.195481 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-config-data\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 
05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.195542 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-scripts\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.195598 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.195626 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f547873-a935-4345-8617-42aa6e55b2bd-log-httpd\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.195752 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f547873-a935-4345-8617-42aa6e55b2bd-run-httpd\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.195789 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.195864 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-44jzw\" (UniqueName: \"kubernetes.io/projected/2f547873-a935-4345-8617-42aa6e55b2bd-kube-api-access-44jzw\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.197279 4796 scope.go:117] "RemoveContainer" containerID="ecbcad4670aac5554fb8d8c447b0964dde3804ac02a562514c520ab966740d31" Dec 05 10:43:00 crc kubenswrapper[4796]: E1205 10:43:00.197617 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecbcad4670aac5554fb8d8c447b0964dde3804ac02a562514c520ab966740d31\": container with ID starting with ecbcad4670aac5554fb8d8c447b0964dde3804ac02a562514c520ab966740d31 not found: ID does not exist" containerID="ecbcad4670aac5554fb8d8c447b0964dde3804ac02a562514c520ab966740d31" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.197643 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecbcad4670aac5554fb8d8c447b0964dde3804ac02a562514c520ab966740d31"} err="failed to get container status \"ecbcad4670aac5554fb8d8c447b0964dde3804ac02a562514c520ab966740d31\": rpc error: code = NotFound desc = could not find container \"ecbcad4670aac5554fb8d8c447b0964dde3804ac02a562514c520ab966740d31\": container with ID starting with ecbcad4670aac5554fb8d8c447b0964dde3804ac02a562514c520ab966740d31 not found: ID does not exist" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.197663 4796 scope.go:117] "RemoveContainer" containerID="6d19c94ae18b5c90e20d92e8e68e3242084d519b6e48a73d8b24134c890c791f" Dec 05 10:43:00 crc kubenswrapper[4796]: E1205 10:43:00.197933 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d19c94ae18b5c90e20d92e8e68e3242084d519b6e48a73d8b24134c890c791f\": container with ID starting with 
6d19c94ae18b5c90e20d92e8e68e3242084d519b6e48a73d8b24134c890c791f not found: ID does not exist" containerID="6d19c94ae18b5c90e20d92e8e68e3242084d519b6e48a73d8b24134c890c791f" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.197955 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d19c94ae18b5c90e20d92e8e68e3242084d519b6e48a73d8b24134c890c791f"} err="failed to get container status \"6d19c94ae18b5c90e20d92e8e68e3242084d519b6e48a73d8b24134c890c791f\": rpc error: code = NotFound desc = could not find container \"6d19c94ae18b5c90e20d92e8e68e3242084d519b6e48a73d8b24134c890c791f\": container with ID starting with 6d19c94ae18b5c90e20d92e8e68e3242084d519b6e48a73d8b24134c890c791f not found: ID does not exist" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.197968 4796 scope.go:117] "RemoveContainer" containerID="9f55ccd4203bcdcb909cc60ab7ebc76dd62333824c5ec005f433aec87edb01c1" Dec 05 10:43:00 crc kubenswrapper[4796]: E1205 10:43:00.198319 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f55ccd4203bcdcb909cc60ab7ebc76dd62333824c5ec005f433aec87edb01c1\": container with ID starting with 9f55ccd4203bcdcb909cc60ab7ebc76dd62333824c5ec005f433aec87edb01c1 not found: ID does not exist" containerID="9f55ccd4203bcdcb909cc60ab7ebc76dd62333824c5ec005f433aec87edb01c1" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.198348 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f55ccd4203bcdcb909cc60ab7ebc76dd62333824c5ec005f433aec87edb01c1"} err="failed to get container status \"9f55ccd4203bcdcb909cc60ab7ebc76dd62333824c5ec005f433aec87edb01c1\": rpc error: code = NotFound desc = could not find container \"9f55ccd4203bcdcb909cc60ab7ebc76dd62333824c5ec005f433aec87edb01c1\": container with ID starting with 9f55ccd4203bcdcb909cc60ab7ebc76dd62333824c5ec005f433aec87edb01c1 not found: ID does not 
exist" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.198371 4796 scope.go:117] "RemoveContainer" containerID="84c8f63735962e7d5f71a788cc589b6c65a2cfe28b4f6d6922e25b881d953183" Dec 05 10:43:00 crc kubenswrapper[4796]: E1205 10:43:00.198609 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84c8f63735962e7d5f71a788cc589b6c65a2cfe28b4f6d6922e25b881d953183\": container with ID starting with 84c8f63735962e7d5f71a788cc589b6c65a2cfe28b4f6d6922e25b881d953183 not found: ID does not exist" containerID="84c8f63735962e7d5f71a788cc589b6c65a2cfe28b4f6d6922e25b881d953183" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.198635 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84c8f63735962e7d5f71a788cc589b6c65a2cfe28b4f6d6922e25b881d953183"} err="failed to get container status \"84c8f63735962e7d5f71a788cc589b6c65a2cfe28b4f6d6922e25b881d953183\": rpc error: code = NotFound desc = could not find container \"84c8f63735962e7d5f71a788cc589b6c65a2cfe28b4f6d6922e25b881d953183\": container with ID starting with 84c8f63735962e7d5f71a788cc589b6c65a2cfe28b4f6d6922e25b881d953183 not found: ID does not exist" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.296793 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-config-data\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.296843 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-scripts\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.296871 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.296887 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f547873-a935-4345-8617-42aa6e55b2bd-log-httpd\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.296951 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f547873-a935-4345-8617-42aa6e55b2bd-run-httpd\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.296965 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.297003 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44jzw\" (UniqueName: \"kubernetes.io/projected/2f547873-a935-4345-8617-42aa6e55b2bd-kube-api-access-44jzw\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.297349 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f547873-a935-4345-8617-42aa6e55b2bd-log-httpd\") pod \"ceilometer-0\" (UID: 
\"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.297489 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f547873-a935-4345-8617-42aa6e55b2bd-run-httpd\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.300283 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.300404 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-scripts\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.300428 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.300774 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-config-data\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.309534 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44jzw\" (UniqueName: 
\"kubernetes.io/projected/2f547873-a935-4345-8617-42aa6e55b2bd-kube-api-access-44jzw\") pod \"ceilometer-0\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.469386 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:43:00 crc kubenswrapper[4796]: I1205 10:43:00.837127 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:43:00 crc kubenswrapper[4796]: W1205 10:43:00.907254 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f547873_a935_4345_8617_42aa6e55b2bd.slice/crio-fbcc71120da16af95ee01e3d4c6cf406e2ad26c434dc26b8e29722ee6c72bfed WatchSource:0}: Error finding container fbcc71120da16af95ee01e3d4c6cf406e2ad26c434dc26b8e29722ee6c72bfed: Status 404 returned error can't find the container with id fbcc71120da16af95ee01e3d4c6cf406e2ad26c434dc26b8e29722ee6c72bfed Dec 05 10:43:01 crc kubenswrapper[4796]: I1205 10:43:01.102567 4796 generic.go:334] "Generic (PLEG): container finished" podID="9ae7419c-e111-4405-8ed0-90f518e557d8" containerID="e479a14ed99eb8d50f49b02aa904852105d4ad6e766e5ed1d765881bf4f751e2" exitCode=0 Dec 05 10:43:01 crc kubenswrapper[4796]: I1205 10:43:01.102630 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ptbpr" event={"ID":"9ae7419c-e111-4405-8ed0-90f518e557d8","Type":"ContainerDied","Data":"e479a14ed99eb8d50f49b02aa904852105d4ad6e766e5ed1d765881bf4f751e2"} Dec 05 10:43:01 crc kubenswrapper[4796]: I1205 10:43:01.103490 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f547873-a935-4345-8617-42aa6e55b2bd","Type":"ContainerStarted","Data":"fbcc71120da16af95ee01e3d4c6cf406e2ad26c434dc26b8e29722ee6c72bfed"} Dec 05 10:43:02 crc kubenswrapper[4796]: I1205 10:43:02.039715 4796 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="914109be-f63c-4c84-87c3-a42943de51b0" path="/var/lib/kubelet/pods/914109be-f63c-4c84-87c3-a42943de51b0/volumes" Dec 05 10:43:02 crc kubenswrapper[4796]: I1205 10:43:02.121140 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f547873-a935-4345-8617-42aa6e55b2bd","Type":"ContainerStarted","Data":"19b6d96a5ed87ec3a1484aa9b53a2d7aae984bd8cbda0df45ad0fb79f09258ce"} Dec 05 10:43:02 crc kubenswrapper[4796]: I1205 10:43:02.395543 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ptbpr" Dec 05 10:43:02 crc kubenswrapper[4796]: I1205 10:43:02.533593 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae7419c-e111-4405-8ed0-90f518e557d8-combined-ca-bundle\") pod \"9ae7419c-e111-4405-8ed0-90f518e557d8\" (UID: \"9ae7419c-e111-4405-8ed0-90f518e557d8\") " Dec 05 10:43:02 crc kubenswrapper[4796]: I1205 10:43:02.533753 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h79ls\" (UniqueName: \"kubernetes.io/projected/9ae7419c-e111-4405-8ed0-90f518e557d8-kube-api-access-h79ls\") pod \"9ae7419c-e111-4405-8ed0-90f518e557d8\" (UID: \"9ae7419c-e111-4405-8ed0-90f518e557d8\") " Dec 05 10:43:02 crc kubenswrapper[4796]: I1205 10:43:02.533826 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ae7419c-e111-4405-8ed0-90f518e557d8-scripts\") pod \"9ae7419c-e111-4405-8ed0-90f518e557d8\" (UID: \"9ae7419c-e111-4405-8ed0-90f518e557d8\") " Dec 05 10:43:02 crc kubenswrapper[4796]: I1205 10:43:02.534313 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae7419c-e111-4405-8ed0-90f518e557d8-config-data\") pod 
\"9ae7419c-e111-4405-8ed0-90f518e557d8\" (UID: \"9ae7419c-e111-4405-8ed0-90f518e557d8\") " Dec 05 10:43:02 crc kubenswrapper[4796]: I1205 10:43:02.537481 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ae7419c-e111-4405-8ed0-90f518e557d8-scripts" (OuterVolumeSpecName: "scripts") pod "9ae7419c-e111-4405-8ed0-90f518e557d8" (UID: "9ae7419c-e111-4405-8ed0-90f518e557d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:02 crc kubenswrapper[4796]: I1205 10:43:02.537505 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ae7419c-e111-4405-8ed0-90f518e557d8-kube-api-access-h79ls" (OuterVolumeSpecName: "kube-api-access-h79ls") pod "9ae7419c-e111-4405-8ed0-90f518e557d8" (UID: "9ae7419c-e111-4405-8ed0-90f518e557d8"). InnerVolumeSpecName "kube-api-access-h79ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:43:02 crc kubenswrapper[4796]: I1205 10:43:02.553807 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ae7419c-e111-4405-8ed0-90f518e557d8-config-data" (OuterVolumeSpecName: "config-data") pod "9ae7419c-e111-4405-8ed0-90f518e557d8" (UID: "9ae7419c-e111-4405-8ed0-90f518e557d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:02 crc kubenswrapper[4796]: I1205 10:43:02.558833 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ae7419c-e111-4405-8ed0-90f518e557d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ae7419c-e111-4405-8ed0-90f518e557d8" (UID: "9ae7419c-e111-4405-8ed0-90f518e557d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:02 crc kubenswrapper[4796]: I1205 10:43:02.636118 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae7419c-e111-4405-8ed0-90f518e557d8-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:02 crc kubenswrapper[4796]: I1205 10:43:02.636145 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae7419c-e111-4405-8ed0-90f518e557d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:02 crc kubenswrapper[4796]: I1205 10:43:02.636155 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h79ls\" (UniqueName: \"kubernetes.io/projected/9ae7419c-e111-4405-8ed0-90f518e557d8-kube-api-access-h79ls\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:02 crc kubenswrapper[4796]: I1205 10:43:02.636163 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ae7419c-e111-4405-8ed0-90f518e557d8-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.131091 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-ptbpr" Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.131087 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-ptbpr" event={"ID":"9ae7419c-e111-4405-8ed0-90f518e557d8","Type":"ContainerDied","Data":"f802ffc9c33cd2b69ab810d32dc0a1b3c6bdceb73c58c4ff907e9b4db43d4ef7"} Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.131419 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f802ffc9c33cd2b69ab810d32dc0a1b3c6bdceb73c58c4ff907e9b4db43d4ef7" Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.133127 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f547873-a935-4345-8617-42aa6e55b2bd","Type":"ContainerStarted","Data":"719467e40937aaaab50d3c4a85b57847a7fead9e5df4e634da5107b12569b278"} Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.133167 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f547873-a935-4345-8617-42aa6e55b2bd","Type":"ContainerStarted","Data":"c8f8e644ca0548a21a7b27a3e8cdf9491dcaced2185d01ef2e3715d4ebd397d6"} Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.188995 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 10:43:03 crc kubenswrapper[4796]: E1205 10:43:03.189317 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae7419c-e111-4405-8ed0-90f518e557d8" containerName="nova-cell0-conductor-db-sync" Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.189334 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae7419c-e111-4405-8ed0-90f518e557d8" containerName="nova-cell0-conductor-db-sync" Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.189502 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae7419c-e111-4405-8ed0-90f518e557d8" 
containerName="nova-cell0-conductor-db-sync" Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.190067 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.191728 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-tqvhl" Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.191905 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.198529 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.246151 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1096b18-a70c-4076-ac0c-0a57532ec40e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b1096b18-a70c-4076-ac0c-0a57532ec40e\") " pod="openstack/nova-cell0-conductor-0" Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.246256 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bknm\" (UniqueName: \"kubernetes.io/projected/b1096b18-a70c-4076-ac0c-0a57532ec40e-kube-api-access-8bknm\") pod \"nova-cell0-conductor-0\" (UID: \"b1096b18-a70c-4076-ac0c-0a57532ec40e\") " pod="openstack/nova-cell0-conductor-0" Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.246315 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1096b18-a70c-4076-ac0c-0a57532ec40e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b1096b18-a70c-4076-ac0c-0a57532ec40e\") " pod="openstack/nova-cell0-conductor-0" Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.347769 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bknm\" (UniqueName: \"kubernetes.io/projected/b1096b18-a70c-4076-ac0c-0a57532ec40e-kube-api-access-8bknm\") pod \"nova-cell0-conductor-0\" (UID: \"b1096b18-a70c-4076-ac0c-0a57532ec40e\") " pod="openstack/nova-cell0-conductor-0" Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.347854 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1096b18-a70c-4076-ac0c-0a57532ec40e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b1096b18-a70c-4076-ac0c-0a57532ec40e\") " pod="openstack/nova-cell0-conductor-0" Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.348089 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1096b18-a70c-4076-ac0c-0a57532ec40e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b1096b18-a70c-4076-ac0c-0a57532ec40e\") " pod="openstack/nova-cell0-conductor-0" Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.353179 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1096b18-a70c-4076-ac0c-0a57532ec40e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"b1096b18-a70c-4076-ac0c-0a57532ec40e\") " pod="openstack/nova-cell0-conductor-0" Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.364464 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1096b18-a70c-4076-ac0c-0a57532ec40e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"b1096b18-a70c-4076-ac0c-0a57532ec40e\") " pod="openstack/nova-cell0-conductor-0" Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.372090 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bknm\" (UniqueName: 
\"kubernetes.io/projected/b1096b18-a70c-4076-ac0c-0a57532ec40e-kube-api-access-8bknm\") pod \"nova-cell0-conductor-0\" (UID: \"b1096b18-a70c-4076-ac0c-0a57532ec40e\") " pod="openstack/nova-cell0-conductor-0" Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.526277 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 05 10:43:03 crc kubenswrapper[4796]: I1205 10:43:03.914107 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 05 10:43:03 crc kubenswrapper[4796]: W1205 10:43:03.919922 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1096b18_a70c_4076_ac0c_0a57532ec40e.slice/crio-29494db9c564c49560f4da4d182bb36e84b163de47bd4281fa08fc319a36e6f5 WatchSource:0}: Error finding container 29494db9c564c49560f4da4d182bb36e84b163de47bd4281fa08fc319a36e6f5: Status 404 returned error can't find the container with id 29494db9c564c49560f4da4d182bb36e84b163de47bd4281fa08fc319a36e6f5 Dec 05 10:43:04 crc kubenswrapper[4796]: I1205 10:43:04.140482 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b1096b18-a70c-4076-ac0c-0a57532ec40e","Type":"ContainerStarted","Data":"bdee56ad74bebd49600155a20c8a287c06952a43830658f9cd93ba453886e0db"} Dec 05 10:43:04 crc kubenswrapper[4796]: I1205 10:43:04.140520 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"b1096b18-a70c-4076-ac0c-0a57532ec40e","Type":"ContainerStarted","Data":"29494db9c564c49560f4da4d182bb36e84b163de47bd4281fa08fc319a36e6f5"} Dec 05 10:43:04 crc kubenswrapper[4796]: I1205 10:43:04.140700 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 05 10:43:04 crc kubenswrapper[4796]: I1205 10:43:04.171349 4796 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.171333869 podStartE2EDuration="1.171333869s" podCreationTimestamp="2025-12-05 10:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:43:04.166529761 +0000 UTC m=+930.454635273" watchObservedRunningTime="2025-12-05 10:43:04.171333869 +0000 UTC m=+930.459439382" Dec 05 10:43:05 crc kubenswrapper[4796]: I1205 10:43:05.151634 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f547873-a935-4345-8617-42aa6e55b2bd","Type":"ContainerStarted","Data":"f0c6e8b2ea1d3b7d4b2dbb43a071d2b4f58512dbe10728e7cae9af008cdbe94a"} Dec 05 10:43:05 crc kubenswrapper[4796]: I1205 10:43:05.152053 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 10:43:05 crc kubenswrapper[4796]: I1205 10:43:05.177264 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:43:05 crc kubenswrapper[4796]: I1205 10:43:05.177314 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:43:05 crc kubenswrapper[4796]: I1205 10:43:05.177388 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.954946232 podStartE2EDuration="5.177375383s" podCreationTimestamp="2025-12-05 10:43:00 +0000 UTC" firstStartedPulling="2025-12-05 10:43:00.909668879 +0000 UTC 
m=+927.197774392" lastFinishedPulling="2025-12-05 10:43:04.13209803 +0000 UTC m=+930.420203543" observedRunningTime="2025-12-05 10:43:05.175471068 +0000 UTC m=+931.463576582" watchObservedRunningTime="2025-12-05 10:43:05.177375383 +0000 UTC m=+931.465480895" Dec 05 10:43:13 crc kubenswrapper[4796]: I1205 10:43:13.546883 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 05 10:43:13 crc kubenswrapper[4796]: I1205 10:43:13.937582 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-rkwvn"] Dec 05 10:43:13 crc kubenswrapper[4796]: I1205 10:43:13.938768 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rkwvn" Dec 05 10:43:13 crc kubenswrapper[4796]: I1205 10:43:13.940752 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 05 10:43:13 crc kubenswrapper[4796]: I1205 10:43:13.942677 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 05 10:43:13 crc kubenswrapper[4796]: I1205 10:43:13.943878 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rkwvn"] Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.019642 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/912d5bbd-238c-49b9-ace1-e201dead6822-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rkwvn\" (UID: \"912d5bbd-238c-49b9-ace1-e201dead6822\") " pod="openstack/nova-cell0-cell-mapping-rkwvn" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.019762 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/912d5bbd-238c-49b9-ace1-e201dead6822-scripts\") pod 
\"nova-cell0-cell-mapping-rkwvn\" (UID: \"912d5bbd-238c-49b9-ace1-e201dead6822\") " pod="openstack/nova-cell0-cell-mapping-rkwvn" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.019786 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62g6r\" (UniqueName: \"kubernetes.io/projected/912d5bbd-238c-49b9-ace1-e201dead6822-kube-api-access-62g6r\") pod \"nova-cell0-cell-mapping-rkwvn\" (UID: \"912d5bbd-238c-49b9-ace1-e201dead6822\") " pod="openstack/nova-cell0-cell-mapping-rkwvn" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.019833 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/912d5bbd-238c-49b9-ace1-e201dead6822-config-data\") pod \"nova-cell0-cell-mapping-rkwvn\" (UID: \"912d5bbd-238c-49b9-ace1-e201dead6822\") " pod="openstack/nova-cell0-cell-mapping-rkwvn" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.069459 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.073995 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.075724 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.085026 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.124644 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/912d5bbd-238c-49b9-ace1-e201dead6822-scripts\") pod \"nova-cell0-cell-mapping-rkwvn\" (UID: \"912d5bbd-238c-49b9-ace1-e201dead6822\") " pod="openstack/nova-cell0-cell-mapping-rkwvn" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.124702 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62g6r\" (UniqueName: \"kubernetes.io/projected/912d5bbd-238c-49b9-ace1-e201dead6822-kube-api-access-62g6r\") pod \"nova-cell0-cell-mapping-rkwvn\" (UID: \"912d5bbd-238c-49b9-ace1-e201dead6822\") " pod="openstack/nova-cell0-cell-mapping-rkwvn" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.124774 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/912d5bbd-238c-49b9-ace1-e201dead6822-config-data\") pod \"nova-cell0-cell-mapping-rkwvn\" (UID: \"912d5bbd-238c-49b9-ace1-e201dead6822\") " pod="openstack/nova-cell0-cell-mapping-rkwvn" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.124936 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/912d5bbd-238c-49b9-ace1-e201dead6822-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rkwvn\" (UID: \"912d5bbd-238c-49b9-ace1-e201dead6822\") " pod="openstack/nova-cell0-cell-mapping-rkwvn" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.142636 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/912d5bbd-238c-49b9-ace1-e201dead6822-config-data\") pod \"nova-cell0-cell-mapping-rkwvn\" (UID: \"912d5bbd-238c-49b9-ace1-e201dead6822\") " pod="openstack/nova-cell0-cell-mapping-rkwvn" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.143195 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/912d5bbd-238c-49b9-ace1-e201dead6822-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rkwvn\" (UID: \"912d5bbd-238c-49b9-ace1-e201dead6822\") " pod="openstack/nova-cell0-cell-mapping-rkwvn" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.148051 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/912d5bbd-238c-49b9-ace1-e201dead6822-scripts\") pod \"nova-cell0-cell-mapping-rkwvn\" (UID: \"912d5bbd-238c-49b9-ace1-e201dead6822\") " pod="openstack/nova-cell0-cell-mapping-rkwvn" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.164051 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62g6r\" (UniqueName: \"kubernetes.io/projected/912d5bbd-238c-49b9-ace1-e201dead6822-kube-api-access-62g6r\") pod \"nova-cell0-cell-mapping-rkwvn\" (UID: \"912d5bbd-238c-49b9-ace1-e201dead6822\") " pod="openstack/nova-cell0-cell-mapping-rkwvn" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.169891 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.171629 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.179057 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.204892 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.219739 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.221027 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.225812 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.237012 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-config-data\") pod \"nova-api-0\" (UID: \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\") " pod="openstack/nova-api-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.237052 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwzlf\" (UniqueName: \"kubernetes.io/projected/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-kube-api-access-mwzlf\") pod \"nova-metadata-0\" (UID: \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\") " pod="openstack/nova-metadata-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.237085 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\") " 
pod="openstack/nova-metadata-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.237156 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-config-data\") pod \"nova-metadata-0\" (UID: \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\") " pod="openstack/nova-metadata-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.237235 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\") " pod="openstack/nova-api-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.237318 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shj4g\" (UniqueName: \"kubernetes.io/projected/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-kube-api-access-shj4g\") pod \"nova-api-0\" (UID: \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\") " pod="openstack/nova-api-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.237354 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-logs\") pod \"nova-api-0\" (UID: \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\") " pod="openstack/nova-api-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.237369 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-logs\") pod \"nova-metadata-0\" (UID: \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\") " pod="openstack/nova-metadata-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.245628 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-scheduler-0"] Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.254721 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rkwvn" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.288781 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c4475fdfc-65svm"] Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.290148 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.347294 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c4475fdfc-65svm"] Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.349049 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwzlf\" (UniqueName: \"kubernetes.io/projected/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-kube-api-access-mwzlf\") pod \"nova-metadata-0\" (UID: \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\") " pod="openstack/nova-metadata-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.349086 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-config-data\") pod \"nova-api-0\" (UID: \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\") " pod="openstack/nova-api-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.349117 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\") " pod="openstack/nova-metadata-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.349165 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-k77k8\" (UniqueName: \"kubernetes.io/projected/69f827ad-08b4-4d78-869a-f1263b94266e-kube-api-access-k77k8\") pod \"nova-scheduler-0\" (UID: \"69f827ad-08b4-4d78-869a-f1263b94266e\") " pod="openstack/nova-scheduler-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.349188 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-config-data\") pod \"nova-metadata-0\" (UID: \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\") " pod="openstack/nova-metadata-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.349235 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\") " pod="openstack/nova-api-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.349268 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shj4g\" (UniqueName: \"kubernetes.io/projected/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-kube-api-access-shj4g\") pod \"nova-api-0\" (UID: \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\") " pod="openstack/nova-api-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.349296 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-logs\") pod \"nova-api-0\" (UID: \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\") " pod="openstack/nova-api-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.349322 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-logs\") pod \"nova-metadata-0\" (UID: \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\") " pod="openstack/nova-metadata-0" Dec 05 
10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.349359 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f827ad-08b4-4d78-869a-f1263b94266e-config-data\") pod \"nova-scheduler-0\" (UID: \"69f827ad-08b4-4d78-869a-f1263b94266e\") " pod="openstack/nova-scheduler-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.349386 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f827ad-08b4-4d78-869a-f1263b94266e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"69f827ad-08b4-4d78-869a-f1263b94266e\") " pod="openstack/nova-scheduler-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.357468 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-config-data\") pod \"nova-api-0\" (UID: \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\") " pod="openstack/nova-api-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.357717 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-logs\") pod \"nova-api-0\" (UID: \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\") " pod="openstack/nova-api-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.359706 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-logs\") pod \"nova-metadata-0\" (UID: \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\") " pod="openstack/nova-metadata-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.369918 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\") " pod="openstack/nova-api-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.373142 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\") " pod="openstack/nova-metadata-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.382839 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-config-data\") pod \"nova-metadata-0\" (UID: \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\") " pod="openstack/nova-metadata-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.388458 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwzlf\" (UniqueName: \"kubernetes.io/projected/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-kube-api-access-mwzlf\") pod \"nova-metadata-0\" (UID: \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\") " pod="openstack/nova-metadata-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.390928 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shj4g\" (UniqueName: \"kubernetes.io/projected/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-kube-api-access-shj4g\") pod \"nova-api-0\" (UID: \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\") " pod="openstack/nova-api-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.392539 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.418521 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.420001 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.425045 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.431529 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.450360 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-config\") pod \"dnsmasq-dns-5c4475fdfc-65svm\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.450394 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctm9m\" (UniqueName: \"kubernetes.io/projected/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-kube-api-access-ctm9m\") pod \"dnsmasq-dns-5c4475fdfc-65svm\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.450422 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f827ad-08b4-4d78-869a-f1263b94266e-config-data\") pod \"nova-scheduler-0\" (UID: \"69f827ad-08b4-4d78-869a-f1263b94266e\") " pod="openstack/nova-scheduler-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.450446 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-dns-swift-storage-0\") pod \"dnsmasq-dns-5c4475fdfc-65svm\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.450469 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f827ad-08b4-4d78-869a-f1263b94266e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"69f827ad-08b4-4d78-869a-f1263b94266e\") " pod="openstack/nova-scheduler-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.450501 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-ovsdbserver-sb\") pod \"dnsmasq-dns-5c4475fdfc-65svm\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.450547 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k77k8\" (UniqueName: \"kubernetes.io/projected/69f827ad-08b4-4d78-869a-f1263b94266e-kube-api-access-k77k8\") pod \"nova-scheduler-0\" (UID: \"69f827ad-08b4-4d78-869a-f1263b94266e\") " pod="openstack/nova-scheduler-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.450580 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-ovsdbserver-nb\") pod \"dnsmasq-dns-5c4475fdfc-65svm\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.450609 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-dns-svc\") pod \"dnsmasq-dns-5c4475fdfc-65svm\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.460255 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f827ad-08b4-4d78-869a-f1263b94266e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"69f827ad-08b4-4d78-869a-f1263b94266e\") " pod="openstack/nova-scheduler-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.461826 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f827ad-08b4-4d78-869a-f1263b94266e-config-data\") pod \"nova-scheduler-0\" (UID: \"69f827ad-08b4-4d78-869a-f1263b94266e\") " pod="openstack/nova-scheduler-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.473028 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k77k8\" (UniqueName: \"kubernetes.io/projected/69f827ad-08b4-4d78-869a-f1263b94266e-kube-api-access-k77k8\") pod \"nova-scheduler-0\" (UID: \"69f827ad-08b4-4d78-869a-f1263b94266e\") " pod="openstack/nova-scheduler-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.546050 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.551641 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctm9m\" (UniqueName: \"kubernetes.io/projected/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-kube-api-access-ctm9m\") pod \"dnsmasq-dns-5c4475fdfc-65svm\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.551710 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-dns-swift-storage-0\") pod \"dnsmasq-dns-5c4475fdfc-65svm\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.551745 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43aac5c-bb70-45de-99b4-6a992cf74202-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b43aac5c-bb70-45de-99b4-6a992cf74202\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.551790 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-ovsdbserver-sb\") pod \"dnsmasq-dns-5c4475fdfc-65svm\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.551824 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43aac5c-bb70-45de-99b4-6a992cf74202-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b43aac5c-bb70-45de-99b4-6a992cf74202\") " 
pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.551860 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfzd5\" (UniqueName: \"kubernetes.io/projected/b43aac5c-bb70-45de-99b4-6a992cf74202-kube-api-access-lfzd5\") pod \"nova-cell1-novncproxy-0\" (UID: \"b43aac5c-bb70-45de-99b4-6a992cf74202\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.551917 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-ovsdbserver-nb\") pod \"dnsmasq-dns-5c4475fdfc-65svm\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.551962 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-dns-svc\") pod \"dnsmasq-dns-5c4475fdfc-65svm\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.552004 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-config\") pod \"dnsmasq-dns-5c4475fdfc-65svm\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.552606 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-dns-swift-storage-0\") pod \"dnsmasq-dns-5c4475fdfc-65svm\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc 
kubenswrapper[4796]: I1205 10:43:14.552918 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-config\") pod \"dnsmasq-dns-5c4475fdfc-65svm\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.553292 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-ovsdbserver-nb\") pod \"dnsmasq-dns-5c4475fdfc-65svm\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.553448 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-dns-svc\") pod \"dnsmasq-dns-5c4475fdfc-65svm\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.555917 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-ovsdbserver-sb\") pod \"dnsmasq-dns-5c4475fdfc-65svm\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.567778 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.569483 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctm9m\" (UniqueName: \"kubernetes.io/projected/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-kube-api-access-ctm9m\") pod \"dnsmasq-dns-5c4475fdfc-65svm\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.653158 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43aac5c-bb70-45de-99b4-6a992cf74202-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b43aac5c-bb70-45de-99b4-6a992cf74202\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.653401 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43aac5c-bb70-45de-99b4-6a992cf74202-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b43aac5c-bb70-45de-99b4-6a992cf74202\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.653431 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfzd5\" (UniqueName: \"kubernetes.io/projected/b43aac5c-bb70-45de-99b4-6a992cf74202-kube-api-access-lfzd5\") pod \"nova-cell1-novncproxy-0\" (UID: \"b43aac5c-bb70-45de-99b4-6a992cf74202\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.657106 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43aac5c-bb70-45de-99b4-6a992cf74202-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b43aac5c-bb70-45de-99b4-6a992cf74202\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:14 crc 
kubenswrapper[4796]: I1205 10:43:14.662108 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43aac5c-bb70-45de-99b4-6a992cf74202-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b43aac5c-bb70-45de-99b4-6a992cf74202\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.669975 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfzd5\" (UniqueName: \"kubernetes.io/projected/b43aac5c-bb70-45de-99b4-6a992cf74202-kube-api-access-lfzd5\") pod \"nova-cell1-novncproxy-0\" (UID: \"b43aac5c-bb70-45de-99b4-6a992cf74202\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.719290 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rkwvn"] Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.725279 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:14 crc kubenswrapper[4796]: W1205 10:43:14.734774 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod912d5bbd_238c_49b9_ace1_e201dead6822.slice/crio-2e30dc6f700a23f2011caf48063be2fce75647549f431f21aa888ea7a5758444 WatchSource:0}: Error finding container 2e30dc6f700a23f2011caf48063be2fce75647549f431f21aa888ea7a5758444: Status 404 returned error can't find the container with id 2e30dc6f700a23f2011caf48063be2fce75647549f431f21aa888ea7a5758444 Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.743153 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.835897 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 10:43:14 crc kubenswrapper[4796]: W1205 10:43:14.866904 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2353dfd_80a8_438b_bc21_3cc6c4800d8f.slice/crio-75acaaa3471607f11eda322d531864f9d2383503504b8ec02de22bc37c13a48a WatchSource:0}: Error finding container 75acaaa3471607f11eda322d531864f9d2383503504b8ec02de22bc37c13a48a: Status 404 returned error can't find the container with id 75acaaa3471607f11eda322d531864f9d2383503504b8ec02de22bc37c13a48a Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.869951 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-98wbd"] Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.870946 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-98wbd" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.874787 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.874946 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.889968 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-98wbd"] Dec 05 10:43:14 crc kubenswrapper[4796]: I1205 10:43:14.936540 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:43:14 crc kubenswrapper[4796]: W1205 10:43:14.937610 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bd708c2_2c8d_4cbc_98dd_36da5db1629b.slice/crio-3612ea1940a5b2e51f9e4e6ff6c2d2bfe734081da54d90a147c3e554cdacba0f WatchSource:0}: Error finding container 3612ea1940a5b2e51f9e4e6ff6c2d2bfe734081da54d90a147c3e554cdacba0f: Status 404 returned error can't find the container with id 3612ea1940a5b2e51f9e4e6ff6c2d2bfe734081da54d90a147c3e554cdacba0f Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.042830 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.060776 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c470255-330c-47ec-91f7-a566792f753f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-98wbd\" (UID: \"0c470255-330c-47ec-91f7-a566792f753f\") " pod="openstack/nova-cell1-conductor-db-sync-98wbd" Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.060812 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnt89\" (UniqueName: \"kubernetes.io/projected/0c470255-330c-47ec-91f7-a566792f753f-kube-api-access-cnt89\") pod \"nova-cell1-conductor-db-sync-98wbd\" (UID: \"0c470255-330c-47ec-91f7-a566792f753f\") " pod="openstack/nova-cell1-conductor-db-sync-98wbd" Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.060966 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c470255-330c-47ec-91f7-a566792f753f-config-data\") pod \"nova-cell1-conductor-db-sync-98wbd\" (UID: \"0c470255-330c-47ec-91f7-a566792f753f\") " pod="openstack/nova-cell1-conductor-db-sync-98wbd" Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.061164 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c470255-330c-47ec-91f7-a566792f753f-scripts\") pod \"nova-cell1-conductor-db-sync-98wbd\" (UID: \"0c470255-330c-47ec-91f7-a566792f753f\") " pod="openstack/nova-cell1-conductor-db-sync-98wbd" Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.165911 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c4475fdfc-65svm"] Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.166993 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c470255-330c-47ec-91f7-a566792f753f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-98wbd\" (UID: \"0c470255-330c-47ec-91f7-a566792f753f\") " pod="openstack/nova-cell1-conductor-db-sync-98wbd" Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.167028 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnt89\" (UniqueName: \"kubernetes.io/projected/0c470255-330c-47ec-91f7-a566792f753f-kube-api-access-cnt89\") pod 
\"nova-cell1-conductor-db-sync-98wbd\" (UID: \"0c470255-330c-47ec-91f7-a566792f753f\") " pod="openstack/nova-cell1-conductor-db-sync-98wbd" Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.167192 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c470255-330c-47ec-91f7-a566792f753f-config-data\") pod \"nova-cell1-conductor-db-sync-98wbd\" (UID: \"0c470255-330c-47ec-91f7-a566792f753f\") " pod="openstack/nova-cell1-conductor-db-sync-98wbd" Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.167333 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c470255-330c-47ec-91f7-a566792f753f-scripts\") pod \"nova-cell1-conductor-db-sync-98wbd\" (UID: \"0c470255-330c-47ec-91f7-a566792f753f\") " pod="openstack/nova-cell1-conductor-db-sync-98wbd" Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.172152 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c470255-330c-47ec-91f7-a566792f753f-scripts\") pod \"nova-cell1-conductor-db-sync-98wbd\" (UID: \"0c470255-330c-47ec-91f7-a566792f753f\") " pod="openstack/nova-cell1-conductor-db-sync-98wbd" Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.173232 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c470255-330c-47ec-91f7-a566792f753f-config-data\") pod \"nova-cell1-conductor-db-sync-98wbd\" (UID: \"0c470255-330c-47ec-91f7-a566792f753f\") " pod="openstack/nova-cell1-conductor-db-sync-98wbd" Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.173713 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c470255-330c-47ec-91f7-a566792f753f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-98wbd\" (UID: 
\"0c470255-330c-47ec-91f7-a566792f753f\") " pod="openstack/nova-cell1-conductor-db-sync-98wbd" Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.193167 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnt89\" (UniqueName: \"kubernetes.io/projected/0c470255-330c-47ec-91f7-a566792f753f-kube-api-access-cnt89\") pod \"nova-cell1-conductor-db-sync-98wbd\" (UID: \"0c470255-330c-47ec-91f7-a566792f753f\") " pod="openstack/nova-cell1-conductor-db-sync-98wbd" Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.209712 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-98wbd" Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.240394 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" event={"ID":"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5","Type":"ContainerStarted","Data":"ae92d98d1273d5f33e6c873a275ed50a4d87c0feae4f2e6491a193c5e5d4324e"} Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.241864 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9bd708c2-2c8d-4cbc-98dd-36da5db1629b","Type":"ContainerStarted","Data":"3612ea1940a5b2e51f9e4e6ff6c2d2bfe734081da54d90a147c3e554cdacba0f"} Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.246220 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rkwvn" event={"ID":"912d5bbd-238c-49b9-ace1-e201dead6822","Type":"ContainerStarted","Data":"7cb0bdaa0e5a1d7891eadf4ac6b264c3d5e9f6d5a45484bb72fdb0b16552c8fd"} Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.246307 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rkwvn" event={"ID":"912d5bbd-238c-49b9-ace1-e201dead6822","Type":"ContainerStarted","Data":"2e30dc6f700a23f2011caf48063be2fce75647549f431f21aa888ea7a5758444"} Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 
10:43:15.263858 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"69f827ad-08b4-4d78-869a-f1263b94266e","Type":"ContainerStarted","Data":"be201726bf4649f698b854cd4cdf9514ebbd7605ff19b9e50f76cb3664044c83"} Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.264957 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2353dfd-80a8-438b-bc21-3cc6c4800d8f","Type":"ContainerStarted","Data":"75acaaa3471607f11eda322d531864f9d2383503504b8ec02de22bc37c13a48a"} Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.288001 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.288643 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-rkwvn" podStartSLOduration=2.288632257 podStartE2EDuration="2.288632257s" podCreationTimestamp="2025-12-05 10:43:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:43:15.280481452 +0000 UTC m=+941.568586965" watchObservedRunningTime="2025-12-05 10:43:15.288632257 +0000 UTC m=+941.576737770" Dec 05 10:43:15 crc kubenswrapper[4796]: W1205 10:43:15.292041 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb43aac5c_bb70_45de_99b4_6a992cf74202.slice/crio-b92499a199934d4d19ca5a4485c3edd22a9d51b7766bc1115692a7278ec8ec81 WatchSource:0}: Error finding container b92499a199934d4d19ca5a4485c3edd22a9d51b7766bc1115692a7278ec8ec81: Status 404 returned error can't find the container with id b92499a199934d4d19ca5a4485c3edd22a9d51b7766bc1115692a7278ec8ec81 Dec 05 10:43:15 crc kubenswrapper[4796]: I1205 10:43:15.788831 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-98wbd"] Dec 05 
10:43:15 crc kubenswrapper[4796]: W1205 10:43:15.818865 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c470255_330c_47ec_91f7_a566792f753f.slice/crio-bd3f158d44758f5becb6e27124ff78149ebba07b54f510238a65d34def5bb47a WatchSource:0}: Error finding container bd3f158d44758f5becb6e27124ff78149ebba07b54f510238a65d34def5bb47a: Status 404 returned error can't find the container with id bd3f158d44758f5becb6e27124ff78149ebba07b54f510238a65d34def5bb47a Dec 05 10:43:16 crc kubenswrapper[4796]: I1205 10:43:16.274141 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b43aac5c-bb70-45de-99b4-6a992cf74202","Type":"ContainerStarted","Data":"b92499a199934d4d19ca5a4485c3edd22a9d51b7766bc1115692a7278ec8ec81"} Dec 05 10:43:16 crc kubenswrapper[4796]: I1205 10:43:16.277571 4796 generic.go:334] "Generic (PLEG): container finished" podID="bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5" containerID="bc416c3bc2623dc22b038d73b25eca6f691f88b0fac113ed670d2093d0ccb839" exitCode=0 Dec 05 10:43:16 crc kubenswrapper[4796]: I1205 10:43:16.277641 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" event={"ID":"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5","Type":"ContainerDied","Data":"bc416c3bc2623dc22b038d73b25eca6f691f88b0fac113ed670d2093d0ccb839"} Dec 05 10:43:16 crc kubenswrapper[4796]: I1205 10:43:16.279968 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-98wbd" event={"ID":"0c470255-330c-47ec-91f7-a566792f753f","Type":"ContainerStarted","Data":"1c0aee1babc34988ff6c54c0d82d2abec5769b501fb1feb95bc73a7646900007"} Dec 05 10:43:16 crc kubenswrapper[4796]: I1205 10:43:16.280000 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-98wbd" 
event={"ID":"0c470255-330c-47ec-91f7-a566792f753f","Type":"ContainerStarted","Data":"bd3f158d44758f5becb6e27124ff78149ebba07b54f510238a65d34def5bb47a"} Dec 05 10:43:16 crc kubenswrapper[4796]: I1205 10:43:16.318642 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-98wbd" podStartSLOduration=2.318627056 podStartE2EDuration="2.318627056s" podCreationTimestamp="2025-12-05 10:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:43:16.306480373 +0000 UTC m=+942.594585886" watchObservedRunningTime="2025-12-05 10:43:16.318627056 +0000 UTC m=+942.606732569" Dec 05 10:43:17 crc kubenswrapper[4796]: I1205 10:43:17.271748 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 10:43:17 crc kubenswrapper[4796]: I1205 10:43:17.279737 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:43:18 crc kubenswrapper[4796]: I1205 10:43:18.296241 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b43aac5c-bb70-45de-99b4-6a992cf74202","Type":"ContainerStarted","Data":"9fc9ced94089433dfd087388600c8c743fe4771d6e7d861046804a6d8813a457"} Dec 05 10:43:18 crc kubenswrapper[4796]: I1205 10:43:18.296345 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b43aac5c-bb70-45de-99b4-6a992cf74202" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://9fc9ced94089433dfd087388600c8c743fe4771d6e7d861046804a6d8813a457" gracePeriod=30 Dec 05 10:43:18 crc kubenswrapper[4796]: I1205 10:43:18.298403 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"69f827ad-08b4-4d78-869a-f1263b94266e","Type":"ContainerStarted","Data":"a647e4754e69c367aa9203f884243f57239b27a380a48196cc994bc6bab3c66a"} Dec 05 10:43:18 crc kubenswrapper[4796]: I1205 10:43:18.302888 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2353dfd-80a8-438b-bc21-3cc6c4800d8f","Type":"ContainerStarted","Data":"45a66aeaeb514152335e169451952452545f5856ad41977c6906419ae0fa8796"} Dec 05 10:43:18 crc kubenswrapper[4796]: I1205 10:43:18.302923 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2353dfd-80a8-438b-bc21-3cc6c4800d8f","Type":"ContainerStarted","Data":"b87ed26cd858632ba17b8109a15e4fa8855a592eefec547b95b57c0e91cb07ab"} Dec 05 10:43:18 crc kubenswrapper[4796]: I1205 10:43:18.309967 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" event={"ID":"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5","Type":"ContainerStarted","Data":"d7f7ffbd72266b83d0143293970d4fadcc6ba2e0db040e4938b78d81e9139508"} Dec 05 10:43:18 crc kubenswrapper[4796]: I1205 10:43:18.310035 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:18 crc kubenswrapper[4796]: I1205 10:43:18.318589 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.162465041 podStartE2EDuration="4.318575162s" podCreationTimestamp="2025-12-05 10:43:14 +0000 UTC" firstStartedPulling="2025-12-05 10:43:15.308872107 +0000 UTC m=+941.596977620" lastFinishedPulling="2025-12-05 10:43:17.464982228 +0000 UTC m=+943.753087741" observedRunningTime="2025-12-05 10:43:18.315742322 +0000 UTC m=+944.603847835" watchObservedRunningTime="2025-12-05 10:43:18.318575162 +0000 UTC m=+944.606680675" Dec 05 10:43:18 crc kubenswrapper[4796]: I1205 10:43:18.318970 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"9bd708c2-2c8d-4cbc-98dd-36da5db1629b","Type":"ContainerStarted","Data":"7ba8e3f83f4d1a7c9d0c45aeeadeba78e08bb8b833e55fb655f1bdb2a8f524cb"} Dec 05 10:43:18 crc kubenswrapper[4796]: I1205 10:43:18.319000 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9bd708c2-2c8d-4cbc-98dd-36da5db1629b","Type":"ContainerStarted","Data":"e986cdfc3f5b6bd25c31885e88445cdc381236584a1819599816feedc6d05117"} Dec 05 10:43:18 crc kubenswrapper[4796]: I1205 10:43:18.319043 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9bd708c2-2c8d-4cbc-98dd-36da5db1629b" containerName="nova-metadata-metadata" containerID="cri-o://7ba8e3f83f4d1a7c9d0c45aeeadeba78e08bb8b833e55fb655f1bdb2a8f524cb" gracePeriod=30 Dec 05 10:43:18 crc kubenswrapper[4796]: I1205 10:43:18.319035 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9bd708c2-2c8d-4cbc-98dd-36da5db1629b" containerName="nova-metadata-log" containerID="cri-o://e986cdfc3f5b6bd25c31885e88445cdc381236584a1819599816feedc6d05117" gracePeriod=30 Dec 05 10:43:18 crc kubenswrapper[4796]: I1205 10:43:18.336254 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.923763964 podStartE2EDuration="4.336240087s" podCreationTimestamp="2025-12-05 10:43:14 +0000 UTC" firstStartedPulling="2025-12-05 10:43:15.050850902 +0000 UTC m=+941.338956414" lastFinishedPulling="2025-12-05 10:43:17.463327024 +0000 UTC m=+943.751432537" observedRunningTime="2025-12-05 10:43:18.335301832 +0000 UTC m=+944.623407345" watchObservedRunningTime="2025-12-05 10:43:18.336240087 +0000 UTC m=+944.624345599" Dec 05 10:43:18 crc kubenswrapper[4796]: I1205 10:43:18.362479 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" 
podStartSLOduration=4.362464106 podStartE2EDuration="4.362464106s" podCreationTimestamp="2025-12-05 10:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:43:18.359183784 +0000 UTC m=+944.647289297" watchObservedRunningTime="2025-12-05 10:43:18.362464106 +0000 UTC m=+944.650569618" Dec 05 10:43:18 crc kubenswrapper[4796]: I1205 10:43:18.381522 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.794197606 podStartE2EDuration="4.381506301s" podCreationTimestamp="2025-12-05 10:43:14 +0000 UTC" firstStartedPulling="2025-12-05 10:43:14.87624768 +0000 UTC m=+941.164353193" lastFinishedPulling="2025-12-05 10:43:17.463556375 +0000 UTC m=+943.751661888" observedRunningTime="2025-12-05 10:43:18.376946783 +0000 UTC m=+944.665052306" watchObservedRunningTime="2025-12-05 10:43:18.381506301 +0000 UTC m=+944.669611815" Dec 05 10:43:18 crc kubenswrapper[4796]: I1205 10:43:18.398715 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.875190718 podStartE2EDuration="4.398701913s" podCreationTimestamp="2025-12-05 10:43:14 +0000 UTC" firstStartedPulling="2025-12-05 10:43:14.939840785 +0000 UTC m=+941.227946298" lastFinishedPulling="2025-12-05 10:43:17.463351981 +0000 UTC m=+943.751457493" observedRunningTime="2025-12-05 10:43:18.397452784 +0000 UTC m=+944.685558297" watchObservedRunningTime="2025-12-05 10:43:18.398701913 +0000 UTC m=+944.686807427" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.282055 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.329948 4796 generic.go:334] "Generic (PLEG): container finished" podID="9bd708c2-2c8d-4cbc-98dd-36da5db1629b" containerID="7ba8e3f83f4d1a7c9d0c45aeeadeba78e08bb8b833e55fb655f1bdb2a8f524cb" exitCode=0 Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.329978 4796 generic.go:334] "Generic (PLEG): container finished" podID="9bd708c2-2c8d-4cbc-98dd-36da5db1629b" containerID="e986cdfc3f5b6bd25c31885e88445cdc381236584a1819599816feedc6d05117" exitCode=143 Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.330009 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9bd708c2-2c8d-4cbc-98dd-36da5db1629b","Type":"ContainerDied","Data":"7ba8e3f83f4d1a7c9d0c45aeeadeba78e08bb8b833e55fb655f1bdb2a8f524cb"} Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.330057 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9bd708c2-2c8d-4cbc-98dd-36da5db1629b","Type":"ContainerDied","Data":"e986cdfc3f5b6bd25c31885e88445cdc381236584a1819599816feedc6d05117"} Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.330065 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.330081 4796 scope.go:117] "RemoveContainer" containerID="7ba8e3f83f4d1a7c9d0c45aeeadeba78e08bb8b833e55fb655f1bdb2a8f524cb" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.330070 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9bd708c2-2c8d-4cbc-98dd-36da5db1629b","Type":"ContainerDied","Data":"3612ea1940a5b2e51f9e4e6ff6c2d2bfe734081da54d90a147c3e554cdacba0f"} Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.383545 4796 scope.go:117] "RemoveContainer" containerID="e986cdfc3f5b6bd25c31885e88445cdc381236584a1819599816feedc6d05117" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.400327 4796 scope.go:117] "RemoveContainer" containerID="7ba8e3f83f4d1a7c9d0c45aeeadeba78e08bb8b833e55fb655f1bdb2a8f524cb" Dec 05 10:43:19 crc kubenswrapper[4796]: E1205 10:43:19.400795 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ba8e3f83f4d1a7c9d0c45aeeadeba78e08bb8b833e55fb655f1bdb2a8f524cb\": container with ID starting with 7ba8e3f83f4d1a7c9d0c45aeeadeba78e08bb8b833e55fb655f1bdb2a8f524cb not found: ID does not exist" containerID="7ba8e3f83f4d1a7c9d0c45aeeadeba78e08bb8b833e55fb655f1bdb2a8f524cb" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.400835 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ba8e3f83f4d1a7c9d0c45aeeadeba78e08bb8b833e55fb655f1bdb2a8f524cb"} err="failed to get container status \"7ba8e3f83f4d1a7c9d0c45aeeadeba78e08bb8b833e55fb655f1bdb2a8f524cb\": rpc error: code = NotFound desc = could not find container \"7ba8e3f83f4d1a7c9d0c45aeeadeba78e08bb8b833e55fb655f1bdb2a8f524cb\": container with ID starting with 7ba8e3f83f4d1a7c9d0c45aeeadeba78e08bb8b833e55fb655f1bdb2a8f524cb not found: ID does not exist" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 
10:43:19.400875 4796 scope.go:117] "RemoveContainer" containerID="e986cdfc3f5b6bd25c31885e88445cdc381236584a1819599816feedc6d05117" Dec 05 10:43:19 crc kubenswrapper[4796]: E1205 10:43:19.401170 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e986cdfc3f5b6bd25c31885e88445cdc381236584a1819599816feedc6d05117\": container with ID starting with e986cdfc3f5b6bd25c31885e88445cdc381236584a1819599816feedc6d05117 not found: ID does not exist" containerID="e986cdfc3f5b6bd25c31885e88445cdc381236584a1819599816feedc6d05117" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.401201 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e986cdfc3f5b6bd25c31885e88445cdc381236584a1819599816feedc6d05117"} err="failed to get container status \"e986cdfc3f5b6bd25c31885e88445cdc381236584a1819599816feedc6d05117\": rpc error: code = NotFound desc = could not find container \"e986cdfc3f5b6bd25c31885e88445cdc381236584a1819599816feedc6d05117\": container with ID starting with e986cdfc3f5b6bd25c31885e88445cdc381236584a1819599816feedc6d05117 not found: ID does not exist" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.401222 4796 scope.go:117] "RemoveContainer" containerID="7ba8e3f83f4d1a7c9d0c45aeeadeba78e08bb8b833e55fb655f1bdb2a8f524cb" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.401541 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ba8e3f83f4d1a7c9d0c45aeeadeba78e08bb8b833e55fb655f1bdb2a8f524cb"} err="failed to get container status \"7ba8e3f83f4d1a7c9d0c45aeeadeba78e08bb8b833e55fb655f1bdb2a8f524cb\": rpc error: code = NotFound desc = could not find container \"7ba8e3f83f4d1a7c9d0c45aeeadeba78e08bb8b833e55fb655f1bdb2a8f524cb\": container with ID starting with 7ba8e3f83f4d1a7c9d0c45aeeadeba78e08bb8b833e55fb655f1bdb2a8f524cb not found: ID does not exist" Dec 05 10:43:19 crc 
kubenswrapper[4796]: I1205 10:43:19.401577 4796 scope.go:117] "RemoveContainer" containerID="e986cdfc3f5b6bd25c31885e88445cdc381236584a1819599816feedc6d05117" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.401893 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e986cdfc3f5b6bd25c31885e88445cdc381236584a1819599816feedc6d05117"} err="failed to get container status \"e986cdfc3f5b6bd25c31885e88445cdc381236584a1819599816feedc6d05117\": rpc error: code = NotFound desc = could not find container \"e986cdfc3f5b6bd25c31885e88445cdc381236584a1819599816feedc6d05117\": container with ID starting with e986cdfc3f5b6bd25c31885e88445cdc381236584a1819599816feedc6d05117 not found: ID does not exist" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.459502 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwzlf\" (UniqueName: \"kubernetes.io/projected/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-kube-api-access-mwzlf\") pod \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\" (UID: \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\") " Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.459584 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-config-data\") pod \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\" (UID: \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\") " Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.459650 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-combined-ca-bundle\") pod \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\" (UID: \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\") " Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.459867 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-logs\") pod \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\" (UID: \"9bd708c2-2c8d-4cbc-98dd-36da5db1629b\") " Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.460645 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-logs" (OuterVolumeSpecName: "logs") pod "9bd708c2-2c8d-4cbc-98dd-36da5db1629b" (UID: "9bd708c2-2c8d-4cbc-98dd-36da5db1629b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.475101 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-kube-api-access-mwzlf" (OuterVolumeSpecName: "kube-api-access-mwzlf") pod "9bd708c2-2c8d-4cbc-98dd-36da5db1629b" (UID: "9bd708c2-2c8d-4cbc-98dd-36da5db1629b"). InnerVolumeSpecName "kube-api-access-mwzlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.485193 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-config-data" (OuterVolumeSpecName: "config-data") pod "9bd708c2-2c8d-4cbc-98dd-36da5db1629b" (UID: "9bd708c2-2c8d-4cbc-98dd-36da5db1629b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.487172 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bd708c2-2c8d-4cbc-98dd-36da5db1629b" (UID: "9bd708c2-2c8d-4cbc-98dd-36da5db1629b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.563257 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-logs\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.563289 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwzlf\" (UniqueName: \"kubernetes.io/projected/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-kube-api-access-mwzlf\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.563303 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.563315 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bd708c2-2c8d-4cbc-98dd-36da5db1629b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.568743 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.663167 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.670845 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.679715 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:43:19 crc kubenswrapper[4796]: E1205 10:43:19.680153 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd708c2-2c8d-4cbc-98dd-36da5db1629b" containerName="nova-metadata-metadata" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 
10:43:19.680235 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd708c2-2c8d-4cbc-98dd-36da5db1629b" containerName="nova-metadata-metadata" Dec 05 10:43:19 crc kubenswrapper[4796]: E1205 10:43:19.680311 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd708c2-2c8d-4cbc-98dd-36da5db1629b" containerName="nova-metadata-log" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.680370 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd708c2-2c8d-4cbc-98dd-36da5db1629b" containerName="nova-metadata-log" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.680586 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd708c2-2c8d-4cbc-98dd-36da5db1629b" containerName="nova-metadata-metadata" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.680660 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd708c2-2c8d-4cbc-98dd-36da5db1629b" containerName="nova-metadata-log" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.681633 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.684396 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.684634 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.695287 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.744015 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.869190 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-config-data\") pod \"nova-metadata-0\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " pod="openstack/nova-metadata-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.869229 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbnl6\" (UniqueName: \"kubernetes.io/projected/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-kube-api-access-zbnl6\") pod \"nova-metadata-0\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " pod="openstack/nova-metadata-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.869253 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " pod="openstack/nova-metadata-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.869316 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-logs\") pod \"nova-metadata-0\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " pod="openstack/nova-metadata-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.869505 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " pod="openstack/nova-metadata-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.970805 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " pod="openstack/nova-metadata-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.970863 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbnl6\" (UniqueName: \"kubernetes.io/projected/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-kube-api-access-zbnl6\") pod \"nova-metadata-0\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " pod="openstack/nova-metadata-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.970884 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-config-data\") pod \"nova-metadata-0\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " pod="openstack/nova-metadata-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.970897 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " pod="openstack/nova-metadata-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.970922 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-logs\") pod \"nova-metadata-0\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " pod="openstack/nova-metadata-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.971336 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-logs\") pod \"nova-metadata-0\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " pod="openstack/nova-metadata-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.977023 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " pod="openstack/nova-metadata-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.977071 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " pod="openstack/nova-metadata-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.977152 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-config-data\") pod \"nova-metadata-0\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " pod="openstack/nova-metadata-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 
10:43:19.983857 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbnl6\" (UniqueName: \"kubernetes.io/projected/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-kube-api-access-zbnl6\") pod \"nova-metadata-0\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " pod="openstack/nova-metadata-0" Dec 05 10:43:19 crc kubenswrapper[4796]: I1205 10:43:19.995164 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 10:43:20 crc kubenswrapper[4796]: I1205 10:43:20.040852 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bd708c2-2c8d-4cbc-98dd-36da5db1629b" path="/var/lib/kubelet/pods/9bd708c2-2c8d-4cbc-98dd-36da5db1629b/volumes" Dec 05 10:43:20 crc kubenswrapper[4796]: I1205 10:43:20.344822 4796 generic.go:334] "Generic (PLEG): container finished" podID="0c470255-330c-47ec-91f7-a566792f753f" containerID="1c0aee1babc34988ff6c54c0d82d2abec5769b501fb1feb95bc73a7646900007" exitCode=0 Dec 05 10:43:20 crc kubenswrapper[4796]: I1205 10:43:20.344994 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-98wbd" event={"ID":"0c470255-330c-47ec-91f7-a566792f753f","Type":"ContainerDied","Data":"1c0aee1babc34988ff6c54c0d82d2abec5769b501fb1feb95bc73a7646900007"} Dec 05 10:43:20 crc kubenswrapper[4796]: I1205 10:43:20.405481 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:43:20 crc kubenswrapper[4796]: W1205 10:43:20.415669 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51ee3167_a06c_4dc0_85d8_0b87c7bbd67c.slice/crio-a33228bfdf6b7d29c1cb72f0e7e7e46c98ace0982a7baf7b1bf04c33c07f7d55 WatchSource:0}: Error finding container a33228bfdf6b7d29c1cb72f0e7e7e46c98ace0982a7baf7b1bf04c33c07f7d55: Status 404 returned error can't find the container with id 
a33228bfdf6b7d29c1cb72f0e7e7e46c98ace0982a7baf7b1bf04c33c07f7d55 Dec 05 10:43:21 crc kubenswrapper[4796]: I1205 10:43:21.357230 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c","Type":"ContainerStarted","Data":"7193090fd04ecd8bf078fd8fa0b73bba65ebad64cdf9b3999e34371460e57c84"} Dec 05 10:43:21 crc kubenswrapper[4796]: I1205 10:43:21.357944 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c","Type":"ContainerStarted","Data":"44c5d8eaa12ee1b2e0edf414ad766cd4d58fdbe681b345502dd472ab55bd024f"} Dec 05 10:43:21 crc kubenswrapper[4796]: I1205 10:43:21.357956 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c","Type":"ContainerStarted","Data":"a33228bfdf6b7d29c1cb72f0e7e7e46c98ace0982a7baf7b1bf04c33c07f7d55"} Dec 05 10:43:21 crc kubenswrapper[4796]: I1205 10:43:21.360230 4796 generic.go:334] "Generic (PLEG): container finished" podID="912d5bbd-238c-49b9-ace1-e201dead6822" containerID="7cb0bdaa0e5a1d7891eadf4ac6b264c3d5e9f6d5a45484bb72fdb0b16552c8fd" exitCode=0 Dec 05 10:43:21 crc kubenswrapper[4796]: I1205 10:43:21.360285 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rkwvn" event={"ID":"912d5bbd-238c-49b9-ace1-e201dead6822","Type":"ContainerDied","Data":"7cb0bdaa0e5a1d7891eadf4ac6b264c3d5e9f6d5a45484bb72fdb0b16552c8fd"} Dec 05 10:43:21 crc kubenswrapper[4796]: I1205 10:43:21.379106 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.379086818 podStartE2EDuration="2.379086818s" podCreationTimestamp="2025-12-05 10:43:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:43:21.370578542 +0000 UTC 
m=+947.658684054" watchObservedRunningTime="2025-12-05 10:43:21.379086818 +0000 UTC m=+947.667192332" Dec 05 10:43:21 crc kubenswrapper[4796]: I1205 10:43:21.665514 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-98wbd" Dec 05 10:43:21 crc kubenswrapper[4796]: I1205 10:43:21.810041 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c470255-330c-47ec-91f7-a566792f753f-combined-ca-bundle\") pod \"0c470255-330c-47ec-91f7-a566792f753f\" (UID: \"0c470255-330c-47ec-91f7-a566792f753f\") " Dec 05 10:43:21 crc kubenswrapper[4796]: I1205 10:43:21.810098 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c470255-330c-47ec-91f7-a566792f753f-config-data\") pod \"0c470255-330c-47ec-91f7-a566792f753f\" (UID: \"0c470255-330c-47ec-91f7-a566792f753f\") " Dec 05 10:43:21 crc kubenswrapper[4796]: I1205 10:43:21.810142 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c470255-330c-47ec-91f7-a566792f753f-scripts\") pod \"0c470255-330c-47ec-91f7-a566792f753f\" (UID: \"0c470255-330c-47ec-91f7-a566792f753f\") " Dec 05 10:43:21 crc kubenswrapper[4796]: I1205 10:43:21.810177 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnt89\" (UniqueName: \"kubernetes.io/projected/0c470255-330c-47ec-91f7-a566792f753f-kube-api-access-cnt89\") pod \"0c470255-330c-47ec-91f7-a566792f753f\" (UID: \"0c470255-330c-47ec-91f7-a566792f753f\") " Dec 05 10:43:21 crc kubenswrapper[4796]: I1205 10:43:21.816179 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c470255-330c-47ec-91f7-a566792f753f-scripts" (OuterVolumeSpecName: "scripts") pod "0c470255-330c-47ec-91f7-a566792f753f" (UID: 
"0c470255-330c-47ec-91f7-a566792f753f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:21 crc kubenswrapper[4796]: I1205 10:43:21.818077 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c470255-330c-47ec-91f7-a566792f753f-kube-api-access-cnt89" (OuterVolumeSpecName: "kube-api-access-cnt89") pod "0c470255-330c-47ec-91f7-a566792f753f" (UID: "0c470255-330c-47ec-91f7-a566792f753f"). InnerVolumeSpecName "kube-api-access-cnt89". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:43:21 crc kubenswrapper[4796]: I1205 10:43:21.834660 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c470255-330c-47ec-91f7-a566792f753f-config-data" (OuterVolumeSpecName: "config-data") pod "0c470255-330c-47ec-91f7-a566792f753f" (UID: "0c470255-330c-47ec-91f7-a566792f753f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:21 crc kubenswrapper[4796]: I1205 10:43:21.835112 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c470255-330c-47ec-91f7-a566792f753f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c470255-330c-47ec-91f7-a566792f753f" (UID: "0c470255-330c-47ec-91f7-a566792f753f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:21 crc kubenswrapper[4796]: I1205 10:43:21.912588 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnt89\" (UniqueName: \"kubernetes.io/projected/0c470255-330c-47ec-91f7-a566792f753f-kube-api-access-cnt89\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:21 crc kubenswrapper[4796]: I1205 10:43:21.912620 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c470255-330c-47ec-91f7-a566792f753f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:21 crc kubenswrapper[4796]: I1205 10:43:21.912629 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c470255-330c-47ec-91f7-a566792f753f-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:21 crc kubenswrapper[4796]: I1205 10:43:21.912641 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c470255-330c-47ec-91f7-a566792f753f-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.368407 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-98wbd" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.369234 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-98wbd" event={"ID":"0c470255-330c-47ec-91f7-a566792f753f","Type":"ContainerDied","Data":"bd3f158d44758f5becb6e27124ff78149ebba07b54f510238a65d34def5bb47a"} Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.369257 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd3f158d44758f5becb6e27124ff78149ebba07b54f510238a65d34def5bb47a" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.462880 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 10:43:22 crc kubenswrapper[4796]: E1205 10:43:22.463294 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c470255-330c-47ec-91f7-a566792f753f" containerName="nova-cell1-conductor-db-sync" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.463314 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c470255-330c-47ec-91f7-a566792f753f" containerName="nova-cell1-conductor-db-sync" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.463565 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c470255-330c-47ec-91f7-a566792f753f" containerName="nova-cell1-conductor-db-sync" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.464177 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.466098 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.505329 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.629585 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70590d4b-3f25-4359-8a2d-984d9d98a9ed-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"70590d4b-3f25-4359-8a2d-984d9d98a9ed\") " pod="openstack/nova-cell1-conductor-0" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.630036 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dkrn\" (UniqueName: \"kubernetes.io/projected/70590d4b-3f25-4359-8a2d-984d9d98a9ed-kube-api-access-9dkrn\") pod \"nova-cell1-conductor-0\" (UID: \"70590d4b-3f25-4359-8a2d-984d9d98a9ed\") " pod="openstack/nova-cell1-conductor-0" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.630128 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70590d4b-3f25-4359-8a2d-984d9d98a9ed-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"70590d4b-3f25-4359-8a2d-984d9d98a9ed\") " pod="openstack/nova-cell1-conductor-0" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.731254 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70590d4b-3f25-4359-8a2d-984d9d98a9ed-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"70590d4b-3f25-4359-8a2d-984d9d98a9ed\") " pod="openstack/nova-cell1-conductor-0" Dec 05 10:43:22 crc 
kubenswrapper[4796]: I1205 10:43:22.731405 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dkrn\" (UniqueName: \"kubernetes.io/projected/70590d4b-3f25-4359-8a2d-984d9d98a9ed-kube-api-access-9dkrn\") pod \"nova-cell1-conductor-0\" (UID: \"70590d4b-3f25-4359-8a2d-984d9d98a9ed\") " pod="openstack/nova-cell1-conductor-0" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.731496 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70590d4b-3f25-4359-8a2d-984d9d98a9ed-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"70590d4b-3f25-4359-8a2d-984d9d98a9ed\") " pod="openstack/nova-cell1-conductor-0" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.735819 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70590d4b-3f25-4359-8a2d-984d9d98a9ed-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"70590d4b-3f25-4359-8a2d-984d9d98a9ed\") " pod="openstack/nova-cell1-conductor-0" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.736489 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70590d4b-3f25-4359-8a2d-984d9d98a9ed-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"70590d4b-3f25-4359-8a2d-984d9d98a9ed\") " pod="openstack/nova-cell1-conductor-0" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.747512 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dkrn\" (UniqueName: \"kubernetes.io/projected/70590d4b-3f25-4359-8a2d-984d9d98a9ed-kube-api-access-9dkrn\") pod \"nova-cell1-conductor-0\" (UID: \"70590d4b-3f25-4359-8a2d-984d9d98a9ed\") " pod="openstack/nova-cell1-conductor-0" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.797578 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.810367 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rkwvn" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.935630 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/912d5bbd-238c-49b9-ace1-e201dead6822-combined-ca-bundle\") pod \"912d5bbd-238c-49b9-ace1-e201dead6822\" (UID: \"912d5bbd-238c-49b9-ace1-e201dead6822\") " Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.936177 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/912d5bbd-238c-49b9-ace1-e201dead6822-scripts\") pod \"912d5bbd-238c-49b9-ace1-e201dead6822\" (UID: \"912d5bbd-238c-49b9-ace1-e201dead6822\") " Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.936216 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62g6r\" (UniqueName: \"kubernetes.io/projected/912d5bbd-238c-49b9-ace1-e201dead6822-kube-api-access-62g6r\") pod \"912d5bbd-238c-49b9-ace1-e201dead6822\" (UID: \"912d5bbd-238c-49b9-ace1-e201dead6822\") " Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.936249 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/912d5bbd-238c-49b9-ace1-e201dead6822-config-data\") pod \"912d5bbd-238c-49b9-ace1-e201dead6822\" (UID: \"912d5bbd-238c-49b9-ace1-e201dead6822\") " Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.941924 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/912d5bbd-238c-49b9-ace1-e201dead6822-kube-api-access-62g6r" (OuterVolumeSpecName: "kube-api-access-62g6r") pod "912d5bbd-238c-49b9-ace1-e201dead6822" (UID: 
"912d5bbd-238c-49b9-ace1-e201dead6822"). InnerVolumeSpecName "kube-api-access-62g6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.942009 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/912d5bbd-238c-49b9-ace1-e201dead6822-scripts" (OuterVolumeSpecName: "scripts") pod "912d5bbd-238c-49b9-ace1-e201dead6822" (UID: "912d5bbd-238c-49b9-ace1-e201dead6822"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.960944 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/912d5bbd-238c-49b9-ace1-e201dead6822-config-data" (OuterVolumeSpecName: "config-data") pod "912d5bbd-238c-49b9-ace1-e201dead6822" (UID: "912d5bbd-238c-49b9-ace1-e201dead6822"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:22 crc kubenswrapper[4796]: I1205 10:43:22.963429 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/912d5bbd-238c-49b9-ace1-e201dead6822-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "912d5bbd-238c-49b9-ace1-e201dead6822" (UID: "912d5bbd-238c-49b9-ace1-e201dead6822"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 10:43:23.039284 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/912d5bbd-238c-49b9-ace1-e201dead6822-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 10:43:23.039311 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/912d5bbd-238c-49b9-ace1-e201dead6822-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 10:43:23.039333 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62g6r\" (UniqueName: \"kubernetes.io/projected/912d5bbd-238c-49b9-ace1-e201dead6822-kube-api-access-62g6r\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 10:43:23.039345 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/912d5bbd-238c-49b9-ace1-e201dead6822-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 10:43:23.182304 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 05 10:43:23 crc kubenswrapper[4796]: W1205 10:43:23.184532 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70590d4b_3f25_4359_8a2d_984d9d98a9ed.slice/crio-7cbef81985b2c73f1261c2986de78aaad5a5f44407c3f1f9996d525b00187b42 WatchSource:0}: Error finding container 7cbef81985b2c73f1261c2986de78aaad5a5f44407c3f1f9996d525b00187b42: Status 404 returned error can't find the container with id 7cbef81985b2c73f1261c2986de78aaad5a5f44407c3f1f9996d525b00187b42 Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 10:43:23.377555 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rkwvn" Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 10:43:23.377553 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rkwvn" event={"ID":"912d5bbd-238c-49b9-ace1-e201dead6822","Type":"ContainerDied","Data":"2e30dc6f700a23f2011caf48063be2fce75647549f431f21aa888ea7a5758444"} Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 10:43:23.377648 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e30dc6f700a23f2011caf48063be2fce75647549f431f21aa888ea7a5758444" Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 10:43:23.379132 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"70590d4b-3f25-4359-8a2d-984d9d98a9ed","Type":"ContainerStarted","Data":"f11f43503c8a7cdddd41f0eb73bd8ad84d322e4d65674ed56d289a331e08a828"} Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 10:43:23.379156 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"70590d4b-3f25-4359-8a2d-984d9d98a9ed","Type":"ContainerStarted","Data":"7cbef81985b2c73f1261c2986de78aaad5a5f44407c3f1f9996d525b00187b42"} Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 10:43:23.379263 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 10:43:23.390152 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.390142442 podStartE2EDuration="1.390142442s" podCreationTimestamp="2025-12-05 10:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:43:23.390036834 +0000 UTC m=+949.678142347" watchObservedRunningTime="2025-12-05 10:43:23.390142442 +0000 UTC m=+949.678247955" Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 
10:43:23.571738 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 10:43:23.572211 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c2353dfd-80a8-438b-bc21-3cc6c4800d8f" containerName="nova-api-log" containerID="cri-o://b87ed26cd858632ba17b8109a15e4fa8855a592eefec547b95b57c0e91cb07ab" gracePeriod=30 Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 10:43:23.572258 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c2353dfd-80a8-438b-bc21-3cc6c4800d8f" containerName="nova-api-api" containerID="cri-o://45a66aeaeb514152335e169451952452545f5856ad41977c6906419ae0fa8796" gracePeriod=30 Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 10:43:23.586408 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 10:43:23.586602 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="69f827ad-08b4-4d78-869a-f1263b94266e" containerName="nova-scheduler-scheduler" containerID="cri-o://a647e4754e69c367aa9203f884243f57239b27a380a48196cc994bc6bab3c66a" gracePeriod=30 Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 10:43:23.593452 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 10:43:23.593606 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="51ee3167-a06c-4dc0-85d8-0b87c7bbd67c" containerName="nova-metadata-log" containerID="cri-o://44c5d8eaa12ee1b2e0edf414ad766cd4d58fdbe681b345502dd472ab55bd024f" gracePeriod=30 Dec 05 10:43:23 crc kubenswrapper[4796]: I1205 10:43:23.593824 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="51ee3167-a06c-4dc0-85d8-0b87c7bbd67c" containerName="nova-metadata-metadata" containerID="cri-o://7193090fd04ecd8bf078fd8fa0b73bba65ebad64cdf9b3999e34371460e57c84" gracePeriod=30 Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.130365 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.266257 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-combined-ca-bundle\") pod \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.266717 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-nova-metadata-tls-certs\") pod \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.266840 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbnl6\" (UniqueName: \"kubernetes.io/projected/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-kube-api-access-zbnl6\") pod \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.266949 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-logs\") pod \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.267047 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-config-data\") pod \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\" (UID: \"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c\") " Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.267480 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-logs" (OuterVolumeSpecName: "logs") pod "51ee3167-a06c-4dc0-85d8-0b87c7bbd67c" (UID: "51ee3167-a06c-4dc0-85d8-0b87c7bbd67c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.267802 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-logs\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.282816 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-kube-api-access-zbnl6" (OuterVolumeSpecName: "kube-api-access-zbnl6") pod "51ee3167-a06c-4dc0-85d8-0b87c7bbd67c" (UID: "51ee3167-a06c-4dc0-85d8-0b87c7bbd67c"). InnerVolumeSpecName "kube-api-access-zbnl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.292308 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51ee3167-a06c-4dc0-85d8-0b87c7bbd67c" (UID: "51ee3167-a06c-4dc0-85d8-0b87c7bbd67c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.297315 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-config-data" (OuterVolumeSpecName: "config-data") pod "51ee3167-a06c-4dc0-85d8-0b87c7bbd67c" (UID: "51ee3167-a06c-4dc0-85d8-0b87c7bbd67c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.308826 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "51ee3167-a06c-4dc0-85d8-0b87c7bbd67c" (UID: "51ee3167-a06c-4dc0-85d8-0b87c7bbd67c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.369155 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.369185 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.369199 4796 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.369209 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbnl6\" (UniqueName: 
\"kubernetes.io/projected/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c-kube-api-access-zbnl6\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.389537 4796 generic.go:334] "Generic (PLEG): container finished" podID="51ee3167-a06c-4dc0-85d8-0b87c7bbd67c" containerID="7193090fd04ecd8bf078fd8fa0b73bba65ebad64cdf9b3999e34371460e57c84" exitCode=0 Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.389566 4796 generic.go:334] "Generic (PLEG): container finished" podID="51ee3167-a06c-4dc0-85d8-0b87c7bbd67c" containerID="44c5d8eaa12ee1b2e0edf414ad766cd4d58fdbe681b345502dd472ab55bd024f" exitCode=143 Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.389566 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c","Type":"ContainerDied","Data":"7193090fd04ecd8bf078fd8fa0b73bba65ebad64cdf9b3999e34371460e57c84"} Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.389602 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c","Type":"ContainerDied","Data":"44c5d8eaa12ee1b2e0edf414ad766cd4d58fdbe681b345502dd472ab55bd024f"} Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.389608 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.389627 4796 scope.go:117] "RemoveContainer" containerID="7193090fd04ecd8bf078fd8fa0b73bba65ebad64cdf9b3999e34371460e57c84" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.389614 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"51ee3167-a06c-4dc0-85d8-0b87c7bbd67c","Type":"ContainerDied","Data":"a33228bfdf6b7d29c1cb72f0e7e7e46c98ace0982a7baf7b1bf04c33c07f7d55"} Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.391468 4796 generic.go:334] "Generic (PLEG): container finished" podID="c2353dfd-80a8-438b-bc21-3cc6c4800d8f" containerID="45a66aeaeb514152335e169451952452545f5856ad41977c6906419ae0fa8796" exitCode=0 Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.391488 4796 generic.go:334] "Generic (PLEG): container finished" podID="c2353dfd-80a8-438b-bc21-3cc6c4800d8f" containerID="b87ed26cd858632ba17b8109a15e4fa8855a592eefec547b95b57c0e91cb07ab" exitCode=143 Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.392416 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2353dfd-80a8-438b-bc21-3cc6c4800d8f","Type":"ContainerDied","Data":"45a66aeaeb514152335e169451952452545f5856ad41977c6906419ae0fa8796"} Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.392440 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2353dfd-80a8-438b-bc21-3cc6c4800d8f","Type":"ContainerDied","Data":"b87ed26cd858632ba17b8109a15e4fa8855a592eefec547b95b57c0e91cb07ab"} Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.407394 4796 scope.go:117] "RemoveContainer" containerID="44c5d8eaa12ee1b2e0edf414ad766cd4d58fdbe681b345502dd472ab55bd024f" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.461862 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:43:24 crc 
kubenswrapper[4796]: I1205 10:43:24.469289 4796 scope.go:117] "RemoveContainer" containerID="7193090fd04ecd8bf078fd8fa0b73bba65ebad64cdf9b3999e34371460e57c84" Dec 05 10:43:24 crc kubenswrapper[4796]: E1205 10:43:24.473023 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7193090fd04ecd8bf078fd8fa0b73bba65ebad64cdf9b3999e34371460e57c84\": container with ID starting with 7193090fd04ecd8bf078fd8fa0b73bba65ebad64cdf9b3999e34371460e57c84 not found: ID does not exist" containerID="7193090fd04ecd8bf078fd8fa0b73bba65ebad64cdf9b3999e34371460e57c84" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.473062 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7193090fd04ecd8bf078fd8fa0b73bba65ebad64cdf9b3999e34371460e57c84"} err="failed to get container status \"7193090fd04ecd8bf078fd8fa0b73bba65ebad64cdf9b3999e34371460e57c84\": rpc error: code = NotFound desc = could not find container \"7193090fd04ecd8bf078fd8fa0b73bba65ebad64cdf9b3999e34371460e57c84\": container with ID starting with 7193090fd04ecd8bf078fd8fa0b73bba65ebad64cdf9b3999e34371460e57c84 not found: ID does not exist" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.473085 4796 scope.go:117] "RemoveContainer" containerID="44c5d8eaa12ee1b2e0edf414ad766cd4d58fdbe681b345502dd472ab55bd024f" Dec 05 10:43:24 crc kubenswrapper[4796]: E1205 10:43:24.474362 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44c5d8eaa12ee1b2e0edf414ad766cd4d58fdbe681b345502dd472ab55bd024f\": container with ID starting with 44c5d8eaa12ee1b2e0edf414ad766cd4d58fdbe681b345502dd472ab55bd024f not found: ID does not exist" containerID="44c5d8eaa12ee1b2e0edf414ad766cd4d58fdbe681b345502dd472ab55bd024f" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.474402 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"44c5d8eaa12ee1b2e0edf414ad766cd4d58fdbe681b345502dd472ab55bd024f"} err="failed to get container status \"44c5d8eaa12ee1b2e0edf414ad766cd4d58fdbe681b345502dd472ab55bd024f\": rpc error: code = NotFound desc = could not find container \"44c5d8eaa12ee1b2e0edf414ad766cd4d58fdbe681b345502dd472ab55bd024f\": container with ID starting with 44c5d8eaa12ee1b2e0edf414ad766cd4d58fdbe681b345502dd472ab55bd024f not found: ID does not exist" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.474427 4796 scope.go:117] "RemoveContainer" containerID="7193090fd04ecd8bf078fd8fa0b73bba65ebad64cdf9b3999e34371460e57c84" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.477555 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7193090fd04ecd8bf078fd8fa0b73bba65ebad64cdf9b3999e34371460e57c84"} err="failed to get container status \"7193090fd04ecd8bf078fd8fa0b73bba65ebad64cdf9b3999e34371460e57c84\": rpc error: code = NotFound desc = could not find container \"7193090fd04ecd8bf078fd8fa0b73bba65ebad64cdf9b3999e34371460e57c84\": container with ID starting with 7193090fd04ecd8bf078fd8fa0b73bba65ebad64cdf9b3999e34371460e57c84 not found: ID does not exist" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.477580 4796 scope.go:117] "RemoveContainer" containerID="44c5d8eaa12ee1b2e0edf414ad766cd4d58fdbe681b345502dd472ab55bd024f" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.479102 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44c5d8eaa12ee1b2e0edf414ad766cd4d58fdbe681b345502dd472ab55bd024f"} err="failed to get container status \"44c5d8eaa12ee1b2e0edf414ad766cd4d58fdbe681b345502dd472ab55bd024f\": rpc error: code = NotFound desc = could not find container \"44c5d8eaa12ee1b2e0edf414ad766cd4d58fdbe681b345502dd472ab55bd024f\": container with ID starting with 44c5d8eaa12ee1b2e0edf414ad766cd4d58fdbe681b345502dd472ab55bd024f not found: ID does not 
exist" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.483170 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.489785 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:43:24 crc kubenswrapper[4796]: E1205 10:43:24.490147 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ee3167-a06c-4dc0-85d8-0b87c7bbd67c" containerName="nova-metadata-log" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.490165 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ee3167-a06c-4dc0-85d8-0b87c7bbd67c" containerName="nova-metadata-log" Dec 05 10:43:24 crc kubenswrapper[4796]: E1205 10:43:24.490212 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ee3167-a06c-4dc0-85d8-0b87c7bbd67c" containerName="nova-metadata-metadata" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.490218 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ee3167-a06c-4dc0-85d8-0b87c7bbd67c" containerName="nova-metadata-metadata" Dec 05 10:43:24 crc kubenswrapper[4796]: E1205 10:43:24.490228 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="912d5bbd-238c-49b9-ace1-e201dead6822" containerName="nova-manage" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.490234 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="912d5bbd-238c-49b9-ace1-e201dead6822" containerName="nova-manage" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.490419 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="912d5bbd-238c-49b9-ace1-e201dead6822" containerName="nova-manage" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.490433 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ee3167-a06c-4dc0-85d8-0b87c7bbd67c" containerName="nova-metadata-metadata" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.490447 4796 
memory_manager.go:354] "RemoveStaleState removing state" podUID="51ee3167-a06c-4dc0-85d8-0b87c7bbd67c" containerName="nova-metadata-log" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.491358 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.492905 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.492980 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.496268 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.524095 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.579627 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shj4g\" (UniqueName: \"kubernetes.io/projected/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-kube-api-access-shj4g\") pod \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\" (UID: \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\") " Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.579802 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-config-data\") pod \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\" (UID: \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\") " Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.579977 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-combined-ca-bundle\") pod \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\" 
(UID: \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\") " Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.580082 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-logs\") pod \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\" (UID: \"c2353dfd-80a8-438b-bc21-3cc6c4800d8f\") " Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.580422 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-logs" (OuterVolumeSpecName: "logs") pod "c2353dfd-80a8-438b-bc21-3cc6c4800d8f" (UID: "c2353dfd-80a8-438b-bc21-3cc6c4800d8f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.580591 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cecbf859-aee4-42fc-9ec0-6e552f405df6-logs\") pod \"nova-metadata-0\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " pod="openstack/nova-metadata-0" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.580663 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cecbf859-aee4-42fc-9ec0-6e552f405df6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " pod="openstack/nova-metadata-0" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.580786 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cecbf859-aee4-42fc-9ec0-6e552f405df6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " pod="openstack/nova-metadata-0" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 
10:43:24.580901 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cecbf859-aee4-42fc-9ec0-6e552f405df6-config-data\") pod \"nova-metadata-0\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " pod="openstack/nova-metadata-0" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.581057 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkzfz\" (UniqueName: \"kubernetes.io/projected/cecbf859-aee4-42fc-9ec0-6e552f405df6-kube-api-access-tkzfz\") pod \"nova-metadata-0\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " pod="openstack/nova-metadata-0" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.581311 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-logs\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.582131 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-kube-api-access-shj4g" (OuterVolumeSpecName: "kube-api-access-shj4g") pod "c2353dfd-80a8-438b-bc21-3cc6c4800d8f" (UID: "c2353dfd-80a8-438b-bc21-3cc6c4800d8f"). InnerVolumeSpecName "kube-api-access-shj4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.599906 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2353dfd-80a8-438b-bc21-3cc6c4800d8f" (UID: "c2353dfd-80a8-438b-bc21-3cc6c4800d8f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.600286 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-config-data" (OuterVolumeSpecName: "config-data") pod "c2353dfd-80a8-438b-bc21-3cc6c4800d8f" (UID: "c2353dfd-80a8-438b-bc21-3cc6c4800d8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.683272 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cecbf859-aee4-42fc-9ec0-6e552f405df6-logs\") pod \"nova-metadata-0\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " pod="openstack/nova-metadata-0" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.683311 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cecbf859-aee4-42fc-9ec0-6e552f405df6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " pod="openstack/nova-metadata-0" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.683339 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cecbf859-aee4-42fc-9ec0-6e552f405df6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " pod="openstack/nova-metadata-0" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.683367 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cecbf859-aee4-42fc-9ec0-6e552f405df6-config-data\") pod \"nova-metadata-0\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " pod="openstack/nova-metadata-0" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.683423 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkzfz\" (UniqueName: \"kubernetes.io/projected/cecbf859-aee4-42fc-9ec0-6e552f405df6-kube-api-access-tkzfz\") pod \"nova-metadata-0\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " pod="openstack/nova-metadata-0" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.683547 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shj4g\" (UniqueName: \"kubernetes.io/projected/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-kube-api-access-shj4g\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.683564 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.683574 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2353dfd-80a8-438b-bc21-3cc6c4800d8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.683645 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cecbf859-aee4-42fc-9ec0-6e552f405df6-logs\") pod \"nova-metadata-0\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " pod="openstack/nova-metadata-0" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.686586 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cecbf859-aee4-42fc-9ec0-6e552f405df6-config-data\") pod \"nova-metadata-0\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " pod="openstack/nova-metadata-0" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.687011 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/cecbf859-aee4-42fc-9ec0-6e552f405df6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " pod="openstack/nova-metadata-0" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.687030 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cecbf859-aee4-42fc-9ec0-6e552f405df6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " pod="openstack/nova-metadata-0" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.699647 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkzfz\" (UniqueName: \"kubernetes.io/projected/cecbf859-aee4-42fc-9ec0-6e552f405df6-kube-api-access-tkzfz\") pod \"nova-metadata-0\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " pod="openstack/nova-metadata-0" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.727173 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.802513 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c77d8b67c-xmzls"] Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.802954 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" podUID="95c9a914-32f8-4d28-8791-91c548000b4a" containerName="dnsmasq-dns" containerID="cri-o://0c2e96c4f25c59a111ca1bb6f02d86282c212ea152f0246806e8b8d1952e4910" gracePeriod=10 Dec 05 10:43:24 crc kubenswrapper[4796]: I1205 10:43:24.806901 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.052219 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.195082 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k77k8\" (UniqueName: \"kubernetes.io/projected/69f827ad-08b4-4d78-869a-f1263b94266e-kube-api-access-k77k8\") pod \"69f827ad-08b4-4d78-869a-f1263b94266e\" (UID: \"69f827ad-08b4-4d78-869a-f1263b94266e\") " Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.195245 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f827ad-08b4-4d78-869a-f1263b94266e-config-data\") pod \"69f827ad-08b4-4d78-869a-f1263b94266e\" (UID: \"69f827ad-08b4-4d78-869a-f1263b94266e\") " Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.195395 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f827ad-08b4-4d78-869a-f1263b94266e-combined-ca-bundle\") pod \"69f827ad-08b4-4d78-869a-f1263b94266e\" (UID: \"69f827ad-08b4-4d78-869a-f1263b94266e\") " Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.201252 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f827ad-08b4-4d78-869a-f1263b94266e-kube-api-access-k77k8" (OuterVolumeSpecName: "kube-api-access-k77k8") pod "69f827ad-08b4-4d78-869a-f1263b94266e" (UID: "69f827ad-08b4-4d78-869a-f1263b94266e"). InnerVolumeSpecName "kube-api-access-k77k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.222561 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f827ad-08b4-4d78-869a-f1263b94266e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69f827ad-08b4-4d78-869a-f1263b94266e" (UID: "69f827ad-08b4-4d78-869a-f1263b94266e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.224386 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f827ad-08b4-4d78-869a-f1263b94266e-config-data" (OuterVolumeSpecName: "config-data") pod "69f827ad-08b4-4d78-869a-f1263b94266e" (UID: "69f827ad-08b4-4d78-869a-f1263b94266e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.273076 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.297610 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k77k8\" (UniqueName: \"kubernetes.io/projected/69f827ad-08b4-4d78-869a-f1263b94266e-kube-api-access-k77k8\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.297645 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69f827ad-08b4-4d78-869a-f1263b94266e-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.297655 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69f827ad-08b4-4d78-869a-f1263b94266e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.398592 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh5v8\" (UniqueName: \"kubernetes.io/projected/95c9a914-32f8-4d28-8791-91c548000b4a-kube-api-access-wh5v8\") pod \"95c9a914-32f8-4d28-8791-91c548000b4a\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.398833 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-ovsdbserver-nb\") pod \"95c9a914-32f8-4d28-8791-91c548000b4a\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.398891 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-config\") pod \"95c9a914-32f8-4d28-8791-91c548000b4a\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.398946 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-ovsdbserver-sb\") pod \"95c9a914-32f8-4d28-8791-91c548000b4a\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.399051 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-dns-svc\") pod \"95c9a914-32f8-4d28-8791-91c548000b4a\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.399176 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-dns-swift-storage-0\") pod \"95c9a914-32f8-4d28-8791-91c548000b4a\" (UID: \"95c9a914-32f8-4d28-8791-91c548000b4a\") " Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.402580 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c9a914-32f8-4d28-8791-91c548000b4a-kube-api-access-wh5v8" (OuterVolumeSpecName: "kube-api-access-wh5v8") pod "95c9a914-32f8-4d28-8791-91c548000b4a" (UID: "95c9a914-32f8-4d28-8791-91c548000b4a"). 
InnerVolumeSpecName "kube-api-access-wh5v8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.414532 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.425307 4796 generic.go:334] "Generic (PLEG): container finished" podID="69f827ad-08b4-4d78-869a-f1263b94266e" containerID="a647e4754e69c367aa9203f884243f57239b27a380a48196cc994bc6bab3c66a" exitCode=0 Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.425350 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.425366 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"69f827ad-08b4-4d78-869a-f1263b94266e","Type":"ContainerDied","Data":"a647e4754e69c367aa9203f884243f57239b27a380a48196cc994bc6bab3c66a"} Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.425388 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"69f827ad-08b4-4d78-869a-f1263b94266e","Type":"ContainerDied","Data":"be201726bf4649f698b854cd4cdf9514ebbd7605ff19b9e50f76cb3664044c83"} Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.425409 4796 scope.go:117] "RemoveContainer" containerID="a647e4754e69c367aa9203f884243f57239b27a380a48196cc994bc6bab3c66a" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.427775 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c2353dfd-80a8-438b-bc21-3cc6c4800d8f","Type":"ContainerDied","Data":"75acaaa3471607f11eda322d531864f9d2383503504b8ec02de22bc37c13a48a"} Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.428014 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.435620 4796 generic.go:334] "Generic (PLEG): container finished" podID="95c9a914-32f8-4d28-8791-91c548000b4a" containerID="0c2e96c4f25c59a111ca1bb6f02d86282c212ea152f0246806e8b8d1952e4910" exitCode=0 Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.435639 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" event={"ID":"95c9a914-32f8-4d28-8791-91c548000b4a","Type":"ContainerDied","Data":"0c2e96c4f25c59a111ca1bb6f02d86282c212ea152f0246806e8b8d1952e4910"} Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.435664 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" event={"ID":"95c9a914-32f8-4d28-8791-91c548000b4a","Type":"ContainerDied","Data":"300186872dcd1042868ae9e098d5739c28505ebe392dfa8116312ad08dbc1d41"} Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.435696 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c77d8b67c-xmzls" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.440418 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "95c9a914-32f8-4d28-8791-91c548000b4a" (UID: "95c9a914-32f8-4d28-8791-91c548000b4a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.455090 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "95c9a914-32f8-4d28-8791-91c548000b4a" (UID: "95c9a914-32f8-4d28-8791-91c548000b4a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.468541 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.469772 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-config" (OuterVolumeSpecName: "config") pod "95c9a914-32f8-4d28-8791-91c548000b4a" (UID: "95c9a914-32f8-4d28-8791-91c548000b4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.477509 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.483611 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 10:43:25 crc kubenswrapper[4796]: E1205 10:43:25.483963 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2353dfd-80a8-438b-bc21-3cc6c4800d8f" containerName="nova-api-log" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.483976 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2353dfd-80a8-438b-bc21-3cc6c4800d8f" containerName="nova-api-log" Dec 05 10:43:25 crc kubenswrapper[4796]: E1205 10:43:25.483996 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c9a914-32f8-4d28-8791-91c548000b4a" containerName="init" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.484001 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c9a914-32f8-4d28-8791-91c548000b4a" containerName="init" Dec 05 10:43:25 crc kubenswrapper[4796]: E1205 10:43:25.484023 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f827ad-08b4-4d78-869a-f1263b94266e" containerName="nova-scheduler-scheduler" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.484028 4796 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="69f827ad-08b4-4d78-869a-f1263b94266e" containerName="nova-scheduler-scheduler" Dec 05 10:43:25 crc kubenswrapper[4796]: E1205 10:43:25.484040 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c9a914-32f8-4d28-8791-91c548000b4a" containerName="dnsmasq-dns" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.484045 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c9a914-32f8-4d28-8791-91c548000b4a" containerName="dnsmasq-dns" Dec 05 10:43:25 crc kubenswrapper[4796]: E1205 10:43:25.484054 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2353dfd-80a8-438b-bc21-3cc6c4800d8f" containerName="nova-api-api" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.484059 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2353dfd-80a8-438b-bc21-3cc6c4800d8f" containerName="nova-api-api" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.484224 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c9a914-32f8-4d28-8791-91c548000b4a" containerName="dnsmasq-dns" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.484239 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f827ad-08b4-4d78-869a-f1263b94266e" containerName="nova-scheduler-scheduler" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.484250 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2353dfd-80a8-438b-bc21-3cc6c4800d8f" containerName="nova-api-api" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.484259 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2353dfd-80a8-438b-bc21-3cc6c4800d8f" containerName="nova-api-log" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.484839 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.489132 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "95c9a914-32f8-4d28-8791-91c548000b4a" (UID: "95c9a914-32f8-4d28-8791-91c548000b4a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:43:25 crc kubenswrapper[4796]: E1205 10:43:25.489675 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69f827ad_08b4_4d78_869a_f1263b94266e.slice\": RecentStats: unable to find data in memory cache]" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.491240 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.500942 4796 scope.go:117] "RemoveContainer" containerID="a647e4754e69c367aa9203f884243f57239b27a380a48196cc994bc6bab3c66a" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.502781 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh5v8\" (UniqueName: \"kubernetes.io/projected/95c9a914-32f8-4d28-8791-91c548000b4a-kube-api-access-wh5v8\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.504393 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.504417 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-config\") on node \"crc\" DevicePath \"\"" Dec 05 
10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.504427 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.504439 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:25 crc kubenswrapper[4796]: E1205 10:43:25.502881 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a647e4754e69c367aa9203f884243f57239b27a380a48196cc994bc6bab3c66a\": container with ID starting with a647e4754e69c367aa9203f884243f57239b27a380a48196cc994bc6bab3c66a not found: ID does not exist" containerID="a647e4754e69c367aa9203f884243f57239b27a380a48196cc994bc6bab3c66a" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.504473 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a647e4754e69c367aa9203f884243f57239b27a380a48196cc994bc6bab3c66a"} err="failed to get container status \"a647e4754e69c367aa9203f884243f57239b27a380a48196cc994bc6bab3c66a\": rpc error: code = NotFound desc = could not find container \"a647e4754e69c367aa9203f884243f57239b27a380a48196cc994bc6bab3c66a\": container with ID starting with a647e4754e69c367aa9203f884243f57239b27a380a48196cc994bc6bab3c66a not found: ID does not exist" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.504498 4796 scope.go:117] "RemoveContainer" containerID="45a66aeaeb514152335e169451952452545f5856ad41977c6906419ae0fa8796" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.514731 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.534125 4796 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.540925 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.552203 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95c9a914-32f8-4d28-8791-91c548000b4a" (UID: "95c9a914-32f8-4d28-8791-91c548000b4a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.553213 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.554861 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.567264 4796 scope.go:117] "RemoveContainer" containerID="b87ed26cd858632ba17b8109a15e4fa8855a592eefec547b95b57c0e91cb07ab" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.567918 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.583724 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.605986 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlmpd\" (UniqueName: \"kubernetes.io/projected/19820d25-5afd-4e22-b618-d7ee6e8d36b4-kube-api-access-dlmpd\") pod \"nova-scheduler-0\" (UID: \"19820d25-5afd-4e22-b618-d7ee6e8d36b4\") " pod="openstack/nova-scheduler-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.606072 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19820d25-5afd-4e22-b618-d7ee6e8d36b4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"19820d25-5afd-4e22-b618-d7ee6e8d36b4\") " pod="openstack/nova-scheduler-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.606101 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19820d25-5afd-4e22-b618-d7ee6e8d36b4-config-data\") pod \"nova-scheduler-0\" (UID: \"19820d25-5afd-4e22-b618-d7ee6e8d36b4\") " pod="openstack/nova-scheduler-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.606225 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95c9a914-32f8-4d28-8791-91c548000b4a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.640153 4796 scope.go:117] "RemoveContainer" containerID="0c2e96c4f25c59a111ca1bb6f02d86282c212ea152f0246806e8b8d1952e4910" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.664044 4796 scope.go:117] "RemoveContainer" containerID="322c81b0f1b9c0b3813005e1cafcd5bcecfeb2864ad440ce554843d43b5fb545" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.705084 4796 scope.go:117] "RemoveContainer" containerID="0c2e96c4f25c59a111ca1bb6f02d86282c212ea152f0246806e8b8d1952e4910" Dec 05 10:43:25 crc kubenswrapper[4796]: E1205 10:43:25.705463 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c2e96c4f25c59a111ca1bb6f02d86282c212ea152f0246806e8b8d1952e4910\": container with ID starting with 0c2e96c4f25c59a111ca1bb6f02d86282c212ea152f0246806e8b8d1952e4910 not found: ID does not exist" containerID="0c2e96c4f25c59a111ca1bb6f02d86282c212ea152f0246806e8b8d1952e4910" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.705489 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0c2e96c4f25c59a111ca1bb6f02d86282c212ea152f0246806e8b8d1952e4910"} err="failed to get container status \"0c2e96c4f25c59a111ca1bb6f02d86282c212ea152f0246806e8b8d1952e4910\": rpc error: code = NotFound desc = could not find container \"0c2e96c4f25c59a111ca1bb6f02d86282c212ea152f0246806e8b8d1952e4910\": container with ID starting with 0c2e96c4f25c59a111ca1bb6f02d86282c212ea152f0246806e8b8d1952e4910 not found: ID does not exist" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.705510 4796 scope.go:117] "RemoveContainer" containerID="322c81b0f1b9c0b3813005e1cafcd5bcecfeb2864ad440ce554843d43b5fb545" Dec 05 10:43:25 crc kubenswrapper[4796]: E1205 10:43:25.705879 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"322c81b0f1b9c0b3813005e1cafcd5bcecfeb2864ad440ce554843d43b5fb545\": container with ID starting with 322c81b0f1b9c0b3813005e1cafcd5bcecfeb2864ad440ce554843d43b5fb545 not found: ID does not exist" containerID="322c81b0f1b9c0b3813005e1cafcd5bcecfeb2864ad440ce554843d43b5fb545" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.705920 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"322c81b0f1b9c0b3813005e1cafcd5bcecfeb2864ad440ce554843d43b5fb545"} err="failed to get container status \"322c81b0f1b9c0b3813005e1cafcd5bcecfeb2864ad440ce554843d43b5fb545\": rpc error: code = NotFound desc = could not find container \"322c81b0f1b9c0b3813005e1cafcd5bcecfeb2864ad440ce554843d43b5fb545\": container with ID starting with 322c81b0f1b9c0b3813005e1cafcd5bcecfeb2864ad440ce554843d43b5fb545 not found: ID does not exist" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.708018 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19820d25-5afd-4e22-b618-d7ee6e8d36b4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"19820d25-5afd-4e22-b618-d7ee6e8d36b4\") " pod="openstack/nova-scheduler-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.708051 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aae40e3-76ad-42ac-bf8a-1d05e9074798-logs\") pod \"nova-api-0\" (UID: \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\") " pod="openstack/nova-api-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.708077 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aae40e3-76ad-42ac-bf8a-1d05e9074798-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\") " pod="openstack/nova-api-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.708098 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19820d25-5afd-4e22-b618-d7ee6e8d36b4-config-data\") pod \"nova-scheduler-0\" (UID: \"19820d25-5afd-4e22-b618-d7ee6e8d36b4\") " pod="openstack/nova-scheduler-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.708223 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aae40e3-76ad-42ac-bf8a-1d05e9074798-config-data\") pod \"nova-api-0\" (UID: \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\") " pod="openstack/nova-api-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.708555 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pldbm\" (UniqueName: \"kubernetes.io/projected/0aae40e3-76ad-42ac-bf8a-1d05e9074798-kube-api-access-pldbm\") pod \"nova-api-0\" (UID: \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\") " pod="openstack/nova-api-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.708594 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlmpd\" (UniqueName: \"kubernetes.io/projected/19820d25-5afd-4e22-b618-d7ee6e8d36b4-kube-api-access-dlmpd\") pod \"nova-scheduler-0\" (UID: \"19820d25-5afd-4e22-b618-d7ee6e8d36b4\") " pod="openstack/nova-scheduler-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.711646 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19820d25-5afd-4e22-b618-d7ee6e8d36b4-config-data\") pod \"nova-scheduler-0\" (UID: \"19820d25-5afd-4e22-b618-d7ee6e8d36b4\") " pod="openstack/nova-scheduler-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.711879 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19820d25-5afd-4e22-b618-d7ee6e8d36b4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"19820d25-5afd-4e22-b618-d7ee6e8d36b4\") " pod="openstack/nova-scheduler-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.722044 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlmpd\" (UniqueName: \"kubernetes.io/projected/19820d25-5afd-4e22-b618-d7ee6e8d36b4-kube-api-access-dlmpd\") pod \"nova-scheduler-0\" (UID: \"19820d25-5afd-4e22-b618-d7ee6e8d36b4\") " pod="openstack/nova-scheduler-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.809945 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pldbm\" (UniqueName: \"kubernetes.io/projected/0aae40e3-76ad-42ac-bf8a-1d05e9074798-kube-api-access-pldbm\") pod \"nova-api-0\" (UID: \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\") " pod="openstack/nova-api-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.810012 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aae40e3-76ad-42ac-bf8a-1d05e9074798-logs\") pod 
\"nova-api-0\" (UID: \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\") " pod="openstack/nova-api-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.810033 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aae40e3-76ad-42ac-bf8a-1d05e9074798-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\") " pod="openstack/nova-api-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.810063 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aae40e3-76ad-42ac-bf8a-1d05e9074798-config-data\") pod \"nova-api-0\" (UID: \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\") " pod="openstack/nova-api-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.810837 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aae40e3-76ad-42ac-bf8a-1d05e9074798-logs\") pod \"nova-api-0\" (UID: \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\") " pod="openstack/nova-api-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.819197 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aae40e3-76ad-42ac-bf8a-1d05e9074798-config-data\") pod \"nova-api-0\" (UID: \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\") " pod="openstack/nova-api-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.820169 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aae40e3-76ad-42ac-bf8a-1d05e9074798-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\") " pod="openstack/nova-api-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.824553 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.832706 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pldbm\" (UniqueName: \"kubernetes.io/projected/0aae40e3-76ad-42ac-bf8a-1d05e9074798-kube-api-access-pldbm\") pod \"nova-api-0\" (UID: \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\") " pod="openstack/nova-api-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.899935 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.967573 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c77d8b67c-xmzls"] Dec 05 10:43:25 crc kubenswrapper[4796]: I1205 10:43:25.976629 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c77d8b67c-xmzls"] Dec 05 10:43:26 crc kubenswrapper[4796]: I1205 10:43:26.045288 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ee3167-a06c-4dc0-85d8-0b87c7bbd67c" path="/var/lib/kubelet/pods/51ee3167-a06c-4dc0-85d8-0b87c7bbd67c/volumes" Dec 05 10:43:26 crc kubenswrapper[4796]: I1205 10:43:26.046080 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69f827ad-08b4-4d78-869a-f1263b94266e" path="/var/lib/kubelet/pods/69f827ad-08b4-4d78-869a-f1263b94266e/volumes" Dec 05 10:43:26 crc kubenswrapper[4796]: I1205 10:43:26.046539 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95c9a914-32f8-4d28-8791-91c548000b4a" path="/var/lib/kubelet/pods/95c9a914-32f8-4d28-8791-91c548000b4a/volumes" Dec 05 10:43:26 crc kubenswrapper[4796]: I1205 10:43:26.048039 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2353dfd-80a8-438b-bc21-3cc6c4800d8f" path="/var/lib/kubelet/pods/c2353dfd-80a8-438b-bc21-3cc6c4800d8f/volumes" Dec 05 10:43:26 crc kubenswrapper[4796]: I1205 10:43:26.238082 4796 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 10:43:26 crc kubenswrapper[4796]: I1205 10:43:26.335836 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 10:43:26 crc kubenswrapper[4796]: I1205 10:43:26.449497 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cecbf859-aee4-42fc-9ec0-6e552f405df6","Type":"ContainerStarted","Data":"fd1c85398e52080d36f578e0ab0bcead467de3c88188b6a896613c4113e7eb28"} Dec 05 10:43:26 crc kubenswrapper[4796]: I1205 10:43:26.449541 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cecbf859-aee4-42fc-9ec0-6e552f405df6","Type":"ContainerStarted","Data":"b5d46b11549ee601e73a39682a5dea69a95488b41bb630d7e86c0c6c5c2fa0af"} Dec 05 10:43:26 crc kubenswrapper[4796]: I1205 10:43:26.449551 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cecbf859-aee4-42fc-9ec0-6e552f405df6","Type":"ContainerStarted","Data":"dbae647f08c51135c3fd7a31ece5c757c644af5c3222ca239c4bace6ece2003c"} Dec 05 10:43:26 crc kubenswrapper[4796]: I1205 10:43:26.456984 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0aae40e3-76ad-42ac-bf8a-1d05e9074798","Type":"ContainerStarted","Data":"0656bb57b04338402146da7944052bbb71c0ff6e66d5e94a25c298d90155e3c7"} Dec 05 10:43:26 crc kubenswrapper[4796]: I1205 10:43:26.460289 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"19820d25-5afd-4e22-b618-d7ee6e8d36b4","Type":"ContainerStarted","Data":"bf90440e7964caf3929151b8bd2434289dc07e0cf44c35beb722c0776ce0c78d"} Dec 05 10:43:26 crc kubenswrapper[4796]: I1205 10:43:26.460343 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"19820d25-5afd-4e22-b618-d7ee6e8d36b4","Type":"ContainerStarted","Data":"c6046f9913a98f602830f2d2dab91757e14a9a77bdc2582ad4a2f16b7f369b44"} Dec 05 10:43:26 crc kubenswrapper[4796]: I1205 10:43:26.470557 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.470547156 podStartE2EDuration="2.470547156s" podCreationTimestamp="2025-12-05 10:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:43:26.462203689 +0000 UTC m=+952.750309201" watchObservedRunningTime="2025-12-05 10:43:26.470547156 +0000 UTC m=+952.758652669" Dec 05 10:43:26 crc kubenswrapper[4796]: I1205 10:43:26.480136 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.480127039 podStartE2EDuration="1.480127039s" podCreationTimestamp="2025-12-05 10:43:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:43:26.474002517 +0000 UTC m=+952.762108030" watchObservedRunningTime="2025-12-05 10:43:26.480127039 +0000 UTC m=+952.768232553" Dec 05 10:43:27 crc kubenswrapper[4796]: I1205 10:43:27.470481 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0aae40e3-76ad-42ac-bf8a-1d05e9074798","Type":"ContainerStarted","Data":"7466ec57d05f0897c196d7ce17add0c4721d613a2e3283299f3f7fb464b51c0d"} Dec 05 10:43:27 crc kubenswrapper[4796]: I1205 10:43:27.470782 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0aae40e3-76ad-42ac-bf8a-1d05e9074798","Type":"ContainerStarted","Data":"b895595f1eacec3fee5653f312dd0bb192227d4ced7afc4beb0190bec6fb448f"} Dec 05 10:43:27 crc kubenswrapper[4796]: I1205 10:43:27.494215 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-0" podStartSLOduration=2.49420378 podStartE2EDuration="2.49420378s" podCreationTimestamp="2025-12-05 10:43:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:43:27.48984488 +0000 UTC m=+953.777950393" watchObservedRunningTime="2025-12-05 10:43:27.49420378 +0000 UTC m=+953.782309294" Dec 05 10:43:29 crc kubenswrapper[4796]: I1205 10:43:29.807843 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 10:43:29 crc kubenswrapper[4796]: I1205 10:43:29.809128 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 10:43:30 crc kubenswrapper[4796]: I1205 10:43:30.474403 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 05 10:43:30 crc kubenswrapper[4796]: I1205 10:43:30.825870 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 05 10:43:32 crc kubenswrapper[4796]: I1205 10:43:32.822661 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 05 10:43:33 crc kubenswrapper[4796]: I1205 10:43:33.410039 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 10:43:33 crc kubenswrapper[4796]: I1205 10:43:33.410246 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="d25dde1a-1eba-4b84-b637-134daea7451e" containerName="kube-state-metrics" containerID="cri-o://26359e43cd858b1a5ea823cf04c23cf0b9c6ebc51e05ec16bcb98edd7baed580" gracePeriod=30 Dec 05 10:43:33 crc kubenswrapper[4796]: I1205 10:43:33.827375 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 10:43:33 crc kubenswrapper[4796]: I1205 10:43:33.876839 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr9ft\" (UniqueName: \"kubernetes.io/projected/d25dde1a-1eba-4b84-b637-134daea7451e-kube-api-access-sr9ft\") pod \"d25dde1a-1eba-4b84-b637-134daea7451e\" (UID: \"d25dde1a-1eba-4b84-b637-134daea7451e\") " Dec 05 10:43:33 crc kubenswrapper[4796]: I1205 10:43:33.881572 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d25dde1a-1eba-4b84-b637-134daea7451e-kube-api-access-sr9ft" (OuterVolumeSpecName: "kube-api-access-sr9ft") pod "d25dde1a-1eba-4b84-b637-134daea7451e" (UID: "d25dde1a-1eba-4b84-b637-134daea7451e"). InnerVolumeSpecName "kube-api-access-sr9ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:43:33 crc kubenswrapper[4796]: I1205 10:43:33.979231 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr9ft\" (UniqueName: \"kubernetes.io/projected/d25dde1a-1eba-4b84-b637-134daea7451e-kube-api-access-sr9ft\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.518845 4796 generic.go:334] "Generic (PLEG): container finished" podID="d25dde1a-1eba-4b84-b637-134daea7451e" containerID="26359e43cd858b1a5ea823cf04c23cf0b9c6ebc51e05ec16bcb98edd7baed580" exitCode=2 Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.519126 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d25dde1a-1eba-4b84-b637-134daea7451e","Type":"ContainerDied","Data":"26359e43cd858b1a5ea823cf04c23cf0b9c6ebc51e05ec16bcb98edd7baed580"} Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.519155 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"d25dde1a-1eba-4b84-b637-134daea7451e","Type":"ContainerDied","Data":"d9904b1d0ad69d9d856611bc8401e0deb2d0222e9a98f739e11fd3ff551e87e2"} Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.519175 4796 scope.go:117] "RemoveContainer" containerID="26359e43cd858b1a5ea823cf04c23cf0b9c6ebc51e05ec16bcb98edd7baed580" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.519288 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.538789 4796 scope.go:117] "RemoveContainer" containerID="26359e43cd858b1a5ea823cf04c23cf0b9c6ebc51e05ec16bcb98edd7baed580" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.538875 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 10:43:34 crc kubenswrapper[4796]: E1205 10:43:34.539579 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26359e43cd858b1a5ea823cf04c23cf0b9c6ebc51e05ec16bcb98edd7baed580\": container with ID starting with 26359e43cd858b1a5ea823cf04c23cf0b9c6ebc51e05ec16bcb98edd7baed580 not found: ID does not exist" containerID="26359e43cd858b1a5ea823cf04c23cf0b9c6ebc51e05ec16bcb98edd7baed580" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.539607 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26359e43cd858b1a5ea823cf04c23cf0b9c6ebc51e05ec16bcb98edd7baed580"} err="failed to get container status \"26359e43cd858b1a5ea823cf04c23cf0b9c6ebc51e05ec16bcb98edd7baed580\": rpc error: code = NotFound desc = could not find container \"26359e43cd858b1a5ea823cf04c23cf0b9c6ebc51e05ec16bcb98edd7baed580\": container with ID starting with 26359e43cd858b1a5ea823cf04c23cf0b9c6ebc51e05ec16bcb98edd7baed580 not found: ID does not exist" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.551424 4796 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.558957 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 10:43:34 crc kubenswrapper[4796]: E1205 10:43:34.559281 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d25dde1a-1eba-4b84-b637-134daea7451e" containerName="kube-state-metrics" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.559298 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="d25dde1a-1eba-4b84-b637-134daea7451e" containerName="kube-state-metrics" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.559483 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="d25dde1a-1eba-4b84-b637-134daea7451e" containerName="kube-state-metrics" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.560030 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.562114 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.565283 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.577723 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.589399 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6ztn\" (UniqueName: \"kubernetes.io/projected/0e251696-adfe-46cb-87c3-651b3e038af2-kube-api-access-g6ztn\") pod \"kube-state-metrics-0\" (UID: \"0e251696-adfe-46cb-87c3-651b3e038af2\") " pod="openstack/kube-state-metrics-0" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.589472 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e251696-adfe-46cb-87c3-651b3e038af2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0e251696-adfe-46cb-87c3-651b3e038af2\") " pod="openstack/kube-state-metrics-0" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.589506 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e251696-adfe-46cb-87c3-651b3e038af2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0e251696-adfe-46cb-87c3-651b3e038af2\") " pod="openstack/kube-state-metrics-0" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.589599 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0e251696-adfe-46cb-87c3-651b3e038af2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0e251696-adfe-46cb-87c3-651b3e038af2\") " pod="openstack/kube-state-metrics-0" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.691062 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0e251696-adfe-46cb-87c3-651b3e038af2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0e251696-adfe-46cb-87c3-651b3e038af2\") " pod="openstack/kube-state-metrics-0" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.691167 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6ztn\" (UniqueName: \"kubernetes.io/projected/0e251696-adfe-46cb-87c3-651b3e038af2-kube-api-access-g6ztn\") pod \"kube-state-metrics-0\" (UID: \"0e251696-adfe-46cb-87c3-651b3e038af2\") " pod="openstack/kube-state-metrics-0" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 
10:43:34.691220 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e251696-adfe-46cb-87c3-651b3e038af2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0e251696-adfe-46cb-87c3-651b3e038af2\") " pod="openstack/kube-state-metrics-0" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.691271 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e251696-adfe-46cb-87c3-651b3e038af2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0e251696-adfe-46cb-87c3-651b3e038af2\") " pod="openstack/kube-state-metrics-0" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.696583 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0e251696-adfe-46cb-87c3-651b3e038af2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0e251696-adfe-46cb-87c3-651b3e038af2\") " pod="openstack/kube-state-metrics-0" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.696790 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e251696-adfe-46cb-87c3-651b3e038af2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0e251696-adfe-46cb-87c3-651b3e038af2\") " pod="openstack/kube-state-metrics-0" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.697171 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e251696-adfe-46cb-87c3-651b3e038af2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0e251696-adfe-46cb-87c3-651b3e038af2\") " pod="openstack/kube-state-metrics-0" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.705197 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-g6ztn\" (UniqueName: \"kubernetes.io/projected/0e251696-adfe-46cb-87c3-651b3e038af2-kube-api-access-g6ztn\") pod \"kube-state-metrics-0\" (UID: \"0e251696-adfe-46cb-87c3-651b3e038af2\") " pod="openstack/kube-state-metrics-0" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.807634 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.807696 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.865741 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.866231 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f547873-a935-4345-8617-42aa6e55b2bd" containerName="ceilometer-central-agent" containerID="cri-o://19b6d96a5ed87ec3a1484aa9b53a2d7aae984bd8cbda0df45ad0fb79f09258ce" gracePeriod=30 Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.866256 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f547873-a935-4345-8617-42aa6e55b2bd" containerName="proxy-httpd" containerID="cri-o://f0c6e8b2ea1d3b7d4b2dbb43a071d2b4f58512dbe10728e7cae9af008cdbe94a" gracePeriod=30 Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.866307 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f547873-a935-4345-8617-42aa6e55b2bd" containerName="sg-core" containerID="cri-o://719467e40937aaaab50d3c4a85b57847a7fead9e5df4e634da5107b12569b278" gracePeriod=30 Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.866355 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="2f547873-a935-4345-8617-42aa6e55b2bd" containerName="ceilometer-notification-agent" containerID="cri-o://c8f8e644ca0548a21a7b27a3e8cdf9491dcaced2185d01ef2e3715d4ebd397d6" gracePeriod=30 Dec 05 10:43:34 crc kubenswrapper[4796]: I1205 10:43:34.876264 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.177433 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.177541 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.177613 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.178427 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"43e07b991ca33a9b26481e28d45698e5b0116b0edd51039d2f5f22853ca65e61"} pod="openshift-machine-config-operator/machine-config-daemon-9pllw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.178525 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" 
podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" containerID="cri-o://43e07b991ca33a9b26481e28d45698e5b0116b0edd51039d2f5f22853ca65e61" gracePeriod=600 Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.278951 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.531494 4796 generic.go:334] "Generic (PLEG): container finished" podID="2f547873-a935-4345-8617-42aa6e55b2bd" containerID="f0c6e8b2ea1d3b7d4b2dbb43a071d2b4f58512dbe10728e7cae9af008cdbe94a" exitCode=0 Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.531843 4796 generic.go:334] "Generic (PLEG): container finished" podID="2f547873-a935-4345-8617-42aa6e55b2bd" containerID="719467e40937aaaab50d3c4a85b57847a7fead9e5df4e634da5107b12569b278" exitCode=2 Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.531853 4796 generic.go:334] "Generic (PLEG): container finished" podID="2f547873-a935-4345-8617-42aa6e55b2bd" containerID="19b6d96a5ed87ec3a1484aa9b53a2d7aae984bd8cbda0df45ad0fb79f09258ce" exitCode=0 Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.531566 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f547873-a935-4345-8617-42aa6e55b2bd","Type":"ContainerDied","Data":"f0c6e8b2ea1d3b7d4b2dbb43a071d2b4f58512dbe10728e7cae9af008cdbe94a"} Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.531928 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f547873-a935-4345-8617-42aa6e55b2bd","Type":"ContainerDied","Data":"719467e40937aaaab50d3c4a85b57847a7fead9e5df4e634da5107b12569b278"} Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.531946 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f547873-a935-4345-8617-42aa6e55b2bd","Type":"ContainerDied","Data":"19b6d96a5ed87ec3a1484aa9b53a2d7aae984bd8cbda0df45ad0fb79f09258ce"} 
Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.533574 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0e251696-adfe-46cb-87c3-651b3e038af2","Type":"ContainerStarted","Data":"fa5c62bf6ac280f2effc73e6057c1b0a85721704307f03836d8f2c89711faa79"} Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.537050 4796 generic.go:334] "Generic (PLEG): container finished" podID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerID="43e07b991ca33a9b26481e28d45698e5b0116b0edd51039d2f5f22853ca65e61" exitCode=0 Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.537076 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerDied","Data":"43e07b991ca33a9b26481e28d45698e5b0116b0edd51039d2f5f22853ca65e61"} Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.537092 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerStarted","Data":"9b9837e0a00511791d7723ec246d4ce28520c7946787df15f8db71f53bf3790d"} Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.537108 4796 scope.go:117] "RemoveContainer" containerID="11955f72d8cab8d1dbc9ce29d0048cc5cd67ff89aba57e0b67af6a83dc91a6d7" Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.822835 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cecbf859-aee4-42fc-9ec0-6e552f405df6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.822851 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cecbf859-aee4-42fc-9ec0-6e552f405df6" 
containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.825631 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.850159 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.900271 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 10:43:35 crc kubenswrapper[4796]: I1205 10:43:35.900306 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 10:43:36 crc kubenswrapper[4796]: I1205 10:43:36.053385 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d25dde1a-1eba-4b84-b637-134daea7451e" path="/var/lib/kubelet/pods/d25dde1a-1eba-4b84-b637-134daea7451e/volumes" Dec 05 10:43:36 crc kubenswrapper[4796]: I1205 10:43:36.551384 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0e251696-adfe-46cb-87c3-651b3e038af2","Type":"ContainerStarted","Data":"6d63b876b86aa1e544709ee89d91ae84a5d76afce0258ba0a6c5ede6bd4566c9"} Dec 05 10:43:36 crc kubenswrapper[4796]: I1205 10:43:36.551696 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 05 10:43:36 crc kubenswrapper[4796]: I1205 10:43:36.570660 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.276145781 podStartE2EDuration="2.57064672s" podCreationTimestamp="2025-12-05 10:43:34 +0000 UTC" firstStartedPulling="2025-12-05 10:43:35.288290204 +0000 UTC m=+961.576395717" lastFinishedPulling="2025-12-05 
10:43:35.582791142 +0000 UTC m=+961.870896656" observedRunningTime="2025-12-05 10:43:36.569649072 +0000 UTC m=+962.857754586" watchObservedRunningTime="2025-12-05 10:43:36.57064672 +0000 UTC m=+962.858752223" Dec 05 10:43:36 crc kubenswrapper[4796]: I1205 10:43:36.578764 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 05 10:43:36 crc kubenswrapper[4796]: I1205 10:43:36.981900 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0aae40e3-76ad-42ac-bf8a-1d05e9074798" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 10:43:36 crc kubenswrapper[4796]: I1205 10:43:36.982001 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0aae40e3-76ad-42ac-bf8a-1d05e9074798" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 05 10:43:36 crc kubenswrapper[4796]: I1205 10:43:36.986258 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.140442 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-combined-ca-bundle\") pod \"2f547873-a935-4345-8617-42aa6e55b2bd\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.140777 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-sg-core-conf-yaml\") pod \"2f547873-a935-4345-8617-42aa6e55b2bd\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.140887 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f547873-a935-4345-8617-42aa6e55b2bd-log-httpd\") pod \"2f547873-a935-4345-8617-42aa6e55b2bd\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.140992 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-config-data\") pod \"2f547873-a935-4345-8617-42aa6e55b2bd\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.141239 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-scripts\") pod \"2f547873-a935-4345-8617-42aa6e55b2bd\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.141335 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/2f547873-a935-4345-8617-42aa6e55b2bd-run-httpd\") pod \"2f547873-a935-4345-8617-42aa6e55b2bd\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.141434 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44jzw\" (UniqueName: \"kubernetes.io/projected/2f547873-a935-4345-8617-42aa6e55b2bd-kube-api-access-44jzw\") pod \"2f547873-a935-4345-8617-42aa6e55b2bd\" (UID: \"2f547873-a935-4345-8617-42aa6e55b2bd\") " Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.141627 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f547873-a935-4345-8617-42aa6e55b2bd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2f547873-a935-4345-8617-42aa6e55b2bd" (UID: "2f547873-a935-4345-8617-42aa6e55b2bd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.141831 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f547873-a935-4345-8617-42aa6e55b2bd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2f547873-a935-4345-8617-42aa6e55b2bd" (UID: "2f547873-a935-4345-8617-42aa6e55b2bd"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.143076 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f547873-a935-4345-8617-42aa6e55b2bd-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.143126 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f547873-a935-4345-8617-42aa6e55b2bd-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.157776 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-scripts" (OuterVolumeSpecName: "scripts") pod "2f547873-a935-4345-8617-42aa6e55b2bd" (UID: "2f547873-a935-4345-8617-42aa6e55b2bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.169141 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f547873-a935-4345-8617-42aa6e55b2bd-kube-api-access-44jzw" (OuterVolumeSpecName: "kube-api-access-44jzw") pod "2f547873-a935-4345-8617-42aa6e55b2bd" (UID: "2f547873-a935-4345-8617-42aa6e55b2bd"). InnerVolumeSpecName "kube-api-access-44jzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.181423 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2f547873-a935-4345-8617-42aa6e55b2bd" (UID: "2f547873-a935-4345-8617-42aa6e55b2bd"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.210956 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f547873-a935-4345-8617-42aa6e55b2bd" (UID: "2f547873-a935-4345-8617-42aa6e55b2bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.227895 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-config-data" (OuterVolumeSpecName: "config-data") pod "2f547873-a935-4345-8617-42aa6e55b2bd" (UID: "2f547873-a935-4345-8617-42aa6e55b2bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.244445 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44jzw\" (UniqueName: \"kubernetes.io/projected/2f547873-a935-4345-8617-42aa6e55b2bd-kube-api-access-44jzw\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.244530 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.244585 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.244634 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-config-data\") on node \"crc\" 
DevicePath \"\"" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.244711 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f547873-a935-4345-8617-42aa6e55b2bd-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.562538 4796 generic.go:334] "Generic (PLEG): container finished" podID="2f547873-a935-4345-8617-42aa6e55b2bd" containerID="c8f8e644ca0548a21a7b27a3e8cdf9491dcaced2185d01ef2e3715d4ebd397d6" exitCode=0 Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.562591 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f547873-a935-4345-8617-42aa6e55b2bd","Type":"ContainerDied","Data":"c8f8e644ca0548a21a7b27a3e8cdf9491dcaced2185d01ef2e3715d4ebd397d6"} Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.562628 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f547873-a935-4345-8617-42aa6e55b2bd","Type":"ContainerDied","Data":"fbcc71120da16af95ee01e3d4c6cf406e2ad26c434dc26b8e29722ee6c72bfed"} Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.562645 4796 scope.go:117] "RemoveContainer" containerID="f0c6e8b2ea1d3b7d4b2dbb43a071d2b4f58512dbe10728e7cae9af008cdbe94a" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.562816 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.587737 4796 scope.go:117] "RemoveContainer" containerID="719467e40937aaaab50d3c4a85b57847a7fead9e5df4e634da5107b12569b278" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.589550 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.594863 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.611332 4796 scope.go:117] "RemoveContainer" containerID="c8f8e644ca0548a21a7b27a3e8cdf9491dcaced2185d01ef2e3715d4ebd397d6" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.626257 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:43:37 crc kubenswrapper[4796]: E1205 10:43:37.626835 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f547873-a935-4345-8617-42aa6e55b2bd" containerName="ceilometer-notification-agent" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.626861 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f547873-a935-4345-8617-42aa6e55b2bd" containerName="ceilometer-notification-agent" Dec 05 10:43:37 crc kubenswrapper[4796]: E1205 10:43:37.626890 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f547873-a935-4345-8617-42aa6e55b2bd" containerName="proxy-httpd" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.626897 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f547873-a935-4345-8617-42aa6e55b2bd" containerName="proxy-httpd" Dec 05 10:43:37 crc kubenswrapper[4796]: E1205 10:43:37.626914 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f547873-a935-4345-8617-42aa6e55b2bd" containerName="sg-core" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.626920 4796 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2f547873-a935-4345-8617-42aa6e55b2bd" containerName="sg-core" Dec 05 10:43:37 crc kubenswrapper[4796]: E1205 10:43:37.626948 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f547873-a935-4345-8617-42aa6e55b2bd" containerName="ceilometer-central-agent" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.626954 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f547873-a935-4345-8617-42aa6e55b2bd" containerName="ceilometer-central-agent" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.627214 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f547873-a935-4345-8617-42aa6e55b2bd" containerName="ceilometer-central-agent" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.627234 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f547873-a935-4345-8617-42aa6e55b2bd" containerName="proxy-httpd" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.627246 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f547873-a935-4345-8617-42aa6e55b2bd" containerName="ceilometer-notification-agent" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.627259 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f547873-a935-4345-8617-42aa6e55b2bd" containerName="sg-core" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.629361 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.633201 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.634027 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.634228 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.634378 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.640979 4796 scope.go:117] "RemoveContainer" containerID="19b6d96a5ed87ec3a1484aa9b53a2d7aae984bd8cbda0df45ad0fb79f09258ce" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.668424 4796 scope.go:117] "RemoveContainer" containerID="f0c6e8b2ea1d3b7d4b2dbb43a071d2b4f58512dbe10728e7cae9af008cdbe94a" Dec 05 10:43:37 crc kubenswrapper[4796]: E1205 10:43:37.668899 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0c6e8b2ea1d3b7d4b2dbb43a071d2b4f58512dbe10728e7cae9af008cdbe94a\": container with ID starting with f0c6e8b2ea1d3b7d4b2dbb43a071d2b4f58512dbe10728e7cae9af008cdbe94a not found: ID does not exist" containerID="f0c6e8b2ea1d3b7d4b2dbb43a071d2b4f58512dbe10728e7cae9af008cdbe94a" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.668929 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0c6e8b2ea1d3b7d4b2dbb43a071d2b4f58512dbe10728e7cae9af008cdbe94a"} err="failed to get container status \"f0c6e8b2ea1d3b7d4b2dbb43a071d2b4f58512dbe10728e7cae9af008cdbe94a\": rpc error: code = NotFound desc = could not find container \"f0c6e8b2ea1d3b7d4b2dbb43a071d2b4f58512dbe10728e7cae9af008cdbe94a\": 
container with ID starting with f0c6e8b2ea1d3b7d4b2dbb43a071d2b4f58512dbe10728e7cae9af008cdbe94a not found: ID does not exist" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.668948 4796 scope.go:117] "RemoveContainer" containerID="719467e40937aaaab50d3c4a85b57847a7fead9e5df4e634da5107b12569b278" Dec 05 10:43:37 crc kubenswrapper[4796]: E1205 10:43:37.669191 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"719467e40937aaaab50d3c4a85b57847a7fead9e5df4e634da5107b12569b278\": container with ID starting with 719467e40937aaaab50d3c4a85b57847a7fead9e5df4e634da5107b12569b278 not found: ID does not exist" containerID="719467e40937aaaab50d3c4a85b57847a7fead9e5df4e634da5107b12569b278" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.669212 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"719467e40937aaaab50d3c4a85b57847a7fead9e5df4e634da5107b12569b278"} err="failed to get container status \"719467e40937aaaab50d3c4a85b57847a7fead9e5df4e634da5107b12569b278\": rpc error: code = NotFound desc = could not find container \"719467e40937aaaab50d3c4a85b57847a7fead9e5df4e634da5107b12569b278\": container with ID starting with 719467e40937aaaab50d3c4a85b57847a7fead9e5df4e634da5107b12569b278 not found: ID does not exist" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.669229 4796 scope.go:117] "RemoveContainer" containerID="c8f8e644ca0548a21a7b27a3e8cdf9491dcaced2185d01ef2e3715d4ebd397d6" Dec 05 10:43:37 crc kubenswrapper[4796]: E1205 10:43:37.669455 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8f8e644ca0548a21a7b27a3e8cdf9491dcaced2185d01ef2e3715d4ebd397d6\": container with ID starting with c8f8e644ca0548a21a7b27a3e8cdf9491dcaced2185d01ef2e3715d4ebd397d6 not found: ID does not exist" 
containerID="c8f8e644ca0548a21a7b27a3e8cdf9491dcaced2185d01ef2e3715d4ebd397d6" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.669479 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8f8e644ca0548a21a7b27a3e8cdf9491dcaced2185d01ef2e3715d4ebd397d6"} err="failed to get container status \"c8f8e644ca0548a21a7b27a3e8cdf9491dcaced2185d01ef2e3715d4ebd397d6\": rpc error: code = NotFound desc = could not find container \"c8f8e644ca0548a21a7b27a3e8cdf9491dcaced2185d01ef2e3715d4ebd397d6\": container with ID starting with c8f8e644ca0548a21a7b27a3e8cdf9491dcaced2185d01ef2e3715d4ebd397d6 not found: ID does not exist" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.669492 4796 scope.go:117] "RemoveContainer" containerID="19b6d96a5ed87ec3a1484aa9b53a2d7aae984bd8cbda0df45ad0fb79f09258ce" Dec 05 10:43:37 crc kubenswrapper[4796]: E1205 10:43:37.669913 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19b6d96a5ed87ec3a1484aa9b53a2d7aae984bd8cbda0df45ad0fb79f09258ce\": container with ID starting with 19b6d96a5ed87ec3a1484aa9b53a2d7aae984bd8cbda0df45ad0fb79f09258ce not found: ID does not exist" containerID="19b6d96a5ed87ec3a1484aa9b53a2d7aae984bd8cbda0df45ad0fb79f09258ce" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.669936 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19b6d96a5ed87ec3a1484aa9b53a2d7aae984bd8cbda0df45ad0fb79f09258ce"} err="failed to get container status \"19b6d96a5ed87ec3a1484aa9b53a2d7aae984bd8cbda0df45ad0fb79f09258ce\": rpc error: code = NotFound desc = could not find container \"19b6d96a5ed87ec3a1484aa9b53a2d7aae984bd8cbda0df45ad0fb79f09258ce\": container with ID starting with 19b6d96a5ed87ec3a1484aa9b53a2d7aae984bd8cbda0df45ad0fb79f09258ce not found: ID does not exist" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.758285 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.758385 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xdj4\" (UniqueName: \"kubernetes.io/projected/39979c8a-37dd-458c-a841-0c5d2d3d29d4-kube-api-access-2xdj4\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.758433 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39979c8a-37dd-458c-a841-0c5d2d3d29d4-log-httpd\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.758523 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39979c8a-37dd-458c-a841-0c5d2d3d29d4-run-httpd\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.758559 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-config-data\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.758971 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-scripts\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.760201 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.760269 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.861974 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.862042 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xdj4\" (UniqueName: \"kubernetes.io/projected/39979c8a-37dd-458c-a841-0c5d2d3d29d4-kube-api-access-2xdj4\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.862141 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39979c8a-37dd-458c-a841-0c5d2d3d29d4-log-httpd\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " 
pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.862202 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39979c8a-37dd-458c-a841-0c5d2d3d29d4-run-httpd\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.862224 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-config-data\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.862287 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-scripts\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.862317 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.862365 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.863488 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/39979c8a-37dd-458c-a841-0c5d2d3d29d4-run-httpd\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.863855 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39979c8a-37dd-458c-a841-0c5d2d3d29d4-log-httpd\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.867166 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.867173 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-scripts\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.869378 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.869408 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.869637 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-config-data\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.877862 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xdj4\" (UniqueName: \"kubernetes.io/projected/39979c8a-37dd-458c-a841-0c5d2d3d29d4-kube-api-access-2xdj4\") pod \"ceilometer-0\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " pod="openstack/ceilometer-0" Dec 05 10:43:37 crc kubenswrapper[4796]: I1205 10:43:37.949037 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:43:38 crc kubenswrapper[4796]: I1205 10:43:38.050334 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f547873-a935-4345-8617-42aa6e55b2bd" path="/var/lib/kubelet/pods/2f547873-a935-4345-8617-42aa6e55b2bd/volumes" Dec 05 10:43:38 crc kubenswrapper[4796]: I1205 10:43:38.369604 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:43:38 crc kubenswrapper[4796]: I1205 10:43:38.572185 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39979c8a-37dd-458c-a841-0c5d2d3d29d4","Type":"ContainerStarted","Data":"e8c83dd766e34842bf8a7c5da21d7294a534171469bd3fc319086fc17213a9e7"} Dec 05 10:43:39 crc kubenswrapper[4796]: I1205 10:43:39.597967 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39979c8a-37dd-458c-a841-0c5d2d3d29d4","Type":"ContainerStarted","Data":"7557c3c60650e5d457b364b714ce395b3b5115ce1b1ab00fb77bdc4dcaf1be12"} Dec 05 10:43:40 crc kubenswrapper[4796]: I1205 10:43:40.610525 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"39979c8a-37dd-458c-a841-0c5d2d3d29d4","Type":"ContainerStarted","Data":"f7d8b28de3354d906a7e83e3c654ae8b3a4371221e31a6416a4e4212b38b2c3e"} Dec 05 10:43:40 crc kubenswrapper[4796]: I1205 10:43:40.611937 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39979c8a-37dd-458c-a841-0c5d2d3d29d4","Type":"ContainerStarted","Data":"a37c90ee2868906a0c0296a4928b9b87c449938af8cd985b03ac145f8eaba8fc"} Dec 05 10:43:42 crc kubenswrapper[4796]: I1205 10:43:42.631635 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39979c8a-37dd-458c-a841-0c5d2d3d29d4","Type":"ContainerStarted","Data":"ad9fa8ec4ec27f50f2771ea1c48e3918a5403ab2084d97cbbab1ba4487b5673b"} Dec 05 10:43:42 crc kubenswrapper[4796]: I1205 10:43:42.632398 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 10:43:42 crc kubenswrapper[4796]: I1205 10:43:42.652444 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.23383719 podStartE2EDuration="5.652427553s" podCreationTimestamp="2025-12-05 10:43:37 +0000 UTC" firstStartedPulling="2025-12-05 10:43:38.372275304 +0000 UTC m=+964.660380817" lastFinishedPulling="2025-12-05 10:43:41.790865666 +0000 UTC m=+968.078971180" observedRunningTime="2025-12-05 10:43:42.646548382 +0000 UTC m=+968.934653894" watchObservedRunningTime="2025-12-05 10:43:42.652427553 +0000 UTC m=+968.940533066" Dec 05 10:43:44 crc kubenswrapper[4796]: I1205 10:43:44.813168 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 05 10:43:44 crc kubenswrapper[4796]: I1205 10:43:44.813760 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 05 10:43:44 crc kubenswrapper[4796]: I1205 10:43:44.817594 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Dec 05 10:43:44 crc kubenswrapper[4796]: I1205 10:43:44.818802 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 05 10:43:44 crc kubenswrapper[4796]: I1205 10:43:44.887227 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 05 10:43:45 crc kubenswrapper[4796]: I1205 10:43:45.904458 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 10:43:45 crc kubenswrapper[4796]: I1205 10:43:45.904967 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 10:43:45 crc kubenswrapper[4796]: I1205 10:43:45.905304 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 10:43:45 crc kubenswrapper[4796]: I1205 10:43:45.909075 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.661275 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.664305 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.805591 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9cbcb645-5ct6c"] Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.808157 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.827900 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9cbcb645-5ct6c"] Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.836575 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9cbcb645-5ct6c\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.836795 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9cbcb645-5ct6c\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.836948 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9cbcb645-5ct6c\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.837060 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-config\") pod \"dnsmasq-dns-5c9cbcb645-5ct6c\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.837246 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-dns-svc\") pod \"dnsmasq-dns-5c9cbcb645-5ct6c\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.837431 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ksjv\" (UniqueName: \"kubernetes.io/projected/1c478894-8932-435f-966f-73b440b0ddab-kube-api-access-5ksjv\") pod \"dnsmasq-dns-5c9cbcb645-5ct6c\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.940133 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9cbcb645-5ct6c\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.940203 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9cbcb645-5ct6c\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.940248 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9cbcb645-5ct6c\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.940286 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-config\") pod \"dnsmasq-dns-5c9cbcb645-5ct6c\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.940335 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-dns-svc\") pod \"dnsmasq-dns-5c9cbcb645-5ct6c\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.940411 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ksjv\" (UniqueName: \"kubernetes.io/projected/1c478894-8932-435f-966f-73b440b0ddab-kube-api-access-5ksjv\") pod \"dnsmasq-dns-5c9cbcb645-5ct6c\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.940938 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9cbcb645-5ct6c\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.941629 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-config\") pod \"dnsmasq-dns-5c9cbcb645-5ct6c\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.941654 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9cbcb645-5ct6c\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.941905 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-dns-svc\") pod \"dnsmasq-dns-5c9cbcb645-5ct6c\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.942283 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9cbcb645-5ct6c\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:46 crc kubenswrapper[4796]: I1205 10:43:46.963500 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ksjv\" (UniqueName: \"kubernetes.io/projected/1c478894-8932-435f-966f-73b440b0ddab-kube-api-access-5ksjv\") pod \"dnsmasq-dns-5c9cbcb645-5ct6c\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:47 crc kubenswrapper[4796]: I1205 10:43:47.135795 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:47 crc kubenswrapper[4796]: I1205 10:43:47.593603 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9cbcb645-5ct6c"] Dec 05 10:43:47 crc kubenswrapper[4796]: I1205 10:43:47.669002 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" event={"ID":"1c478894-8932-435f-966f-73b440b0ddab","Type":"ContainerStarted","Data":"8a0efd7e18854cef305f17f2eb3fc45d8b11c636b3b8e7a91380572b1e35271d"} Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.131764 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.132263 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerName="ceilometer-central-agent" containerID="cri-o://7557c3c60650e5d457b364b714ce395b3b5115ce1b1ab00fb77bdc4dcaf1be12" gracePeriod=30 Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.132363 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerName="proxy-httpd" containerID="cri-o://ad9fa8ec4ec27f50f2771ea1c48e3918a5403ab2084d97cbbab1ba4487b5673b" gracePeriod=30 Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.132378 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerName="sg-core" containerID="cri-o://f7d8b28de3354d906a7e83e3c654ae8b3a4371221e31a6416a4e4212b38b2c3e" gracePeriod=30 Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.132434 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerName="ceilometer-notification-agent" 
containerID="cri-o://a37c90ee2868906a0c0296a4928b9b87c449938af8cd985b03ac145f8eaba8fc" gracePeriod=30 Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.590513 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.674392 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43aac5c-bb70-45de-99b4-6a992cf74202-config-data\") pod \"b43aac5c-bb70-45de-99b4-6a992cf74202\" (UID: \"b43aac5c-bb70-45de-99b4-6a992cf74202\") " Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.674480 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43aac5c-bb70-45de-99b4-6a992cf74202-combined-ca-bundle\") pod \"b43aac5c-bb70-45de-99b4-6a992cf74202\" (UID: \"b43aac5c-bb70-45de-99b4-6a992cf74202\") " Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.674603 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfzd5\" (UniqueName: \"kubernetes.io/projected/b43aac5c-bb70-45de-99b4-6a992cf74202-kube-api-access-lfzd5\") pod \"b43aac5c-bb70-45de-99b4-6a992cf74202\" (UID: \"b43aac5c-bb70-45de-99b4-6a992cf74202\") " Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.677905 4796 generic.go:334] "Generic (PLEG): container finished" podID="b43aac5c-bb70-45de-99b4-6a992cf74202" containerID="9fc9ced94089433dfd087388600c8c743fe4771d6e7d861046804a6d8813a457" exitCode=137 Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.677952 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b43aac5c-bb70-45de-99b4-6a992cf74202","Type":"ContainerDied","Data":"9fc9ced94089433dfd087388600c8c743fe4771d6e7d861046804a6d8813a457"} Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.677981 4796 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.678013 4796 scope.go:117] "RemoveContainer" containerID="9fc9ced94089433dfd087388600c8c743fe4771d6e7d861046804a6d8813a457" Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.677998 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b43aac5c-bb70-45de-99b4-6a992cf74202","Type":"ContainerDied","Data":"b92499a199934d4d19ca5a4485c3edd22a9d51b7766bc1115692a7278ec8ec81"} Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.681914 4796 generic.go:334] "Generic (PLEG): container finished" podID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerID="ad9fa8ec4ec27f50f2771ea1c48e3918a5403ab2084d97cbbab1ba4487b5673b" exitCode=0 Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.681936 4796 generic.go:334] "Generic (PLEG): container finished" podID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerID="f7d8b28de3354d906a7e83e3c654ae8b3a4371221e31a6416a4e4212b38b2c3e" exitCode=2 Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.681945 4796 generic.go:334] "Generic (PLEG): container finished" podID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerID="7557c3c60650e5d457b364b714ce395b3b5115ce1b1ab00fb77bdc4dcaf1be12" exitCode=0 Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.681986 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39979c8a-37dd-458c-a841-0c5d2d3d29d4","Type":"ContainerDied","Data":"ad9fa8ec4ec27f50f2771ea1c48e3918a5403ab2084d97cbbab1ba4487b5673b"} Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.682015 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39979c8a-37dd-458c-a841-0c5d2d3d29d4","Type":"ContainerDied","Data":"f7d8b28de3354d906a7e83e3c654ae8b3a4371221e31a6416a4e4212b38b2c3e"} Dec 05 10:43:48 crc 
kubenswrapper[4796]: I1205 10:43:48.682033 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39979c8a-37dd-458c-a841-0c5d2d3d29d4","Type":"ContainerDied","Data":"7557c3c60650e5d457b364b714ce395b3b5115ce1b1ab00fb77bdc4dcaf1be12"} Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.682839 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b43aac5c-bb70-45de-99b4-6a992cf74202-kube-api-access-lfzd5" (OuterVolumeSpecName: "kube-api-access-lfzd5") pod "b43aac5c-bb70-45de-99b4-6a992cf74202" (UID: "b43aac5c-bb70-45de-99b4-6a992cf74202"). InnerVolumeSpecName "kube-api-access-lfzd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.683986 4796 generic.go:334] "Generic (PLEG): container finished" podID="1c478894-8932-435f-966f-73b440b0ddab" containerID="ea2913c389ea4e67c8d764bebea5d5668e7f9a17ac2f8c8e737c28942a7806b4" exitCode=0 Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.684066 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" event={"ID":"1c478894-8932-435f-966f-73b440b0ddab","Type":"ContainerDied","Data":"ea2913c389ea4e67c8d764bebea5d5668e7f9a17ac2f8c8e737c28942a7806b4"} Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.705572 4796 scope.go:117] "RemoveContainer" containerID="9fc9ced94089433dfd087388600c8c743fe4771d6e7d861046804a6d8813a457" Dec 05 10:43:48 crc kubenswrapper[4796]: E1205 10:43:48.705898 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fc9ced94089433dfd087388600c8c743fe4771d6e7d861046804a6d8813a457\": container with ID starting with 9fc9ced94089433dfd087388600c8c743fe4771d6e7d861046804a6d8813a457 not found: ID does not exist" containerID="9fc9ced94089433dfd087388600c8c743fe4771d6e7d861046804a6d8813a457" Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 
10:43:48.705948 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fc9ced94089433dfd087388600c8c743fe4771d6e7d861046804a6d8813a457"} err="failed to get container status \"9fc9ced94089433dfd087388600c8c743fe4771d6e7d861046804a6d8813a457\": rpc error: code = NotFound desc = could not find container \"9fc9ced94089433dfd087388600c8c743fe4771d6e7d861046804a6d8813a457\": container with ID starting with 9fc9ced94089433dfd087388600c8c743fe4771d6e7d861046804a6d8813a457 not found: ID does not exist" Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.709124 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43aac5c-bb70-45de-99b4-6a992cf74202-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b43aac5c-bb70-45de-99b4-6a992cf74202" (UID: "b43aac5c-bb70-45de-99b4-6a992cf74202"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.712699 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b43aac5c-bb70-45de-99b4-6a992cf74202-config-data" (OuterVolumeSpecName: "config-data") pod "b43aac5c-bb70-45de-99b4-6a992cf74202" (UID: "b43aac5c-bb70-45de-99b4-6a992cf74202"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.777771 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b43aac5c-bb70-45de-99b4-6a992cf74202-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.777810 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b43aac5c-bb70-45de-99b4-6a992cf74202-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.777842 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfzd5\" (UniqueName: \"kubernetes.io/projected/b43aac5c-bb70-45de-99b4-6a992cf74202-kube-api-access-lfzd5\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:48 crc kubenswrapper[4796]: I1205 10:43:48.903994 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.030635 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.039055 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.053348 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 10:43:49 crc kubenswrapper[4796]: E1205 10:43:49.054028 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b43aac5c-bb70-45de-99b4-6a992cf74202" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.054049 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b43aac5c-bb70-45de-99b4-6a992cf74202" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.054270 4796 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b43aac5c-bb70-45de-99b4-6a992cf74202" containerName="nova-cell1-novncproxy-novncproxy" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.055161 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.060519 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.060839 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.061567 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.067270 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.189956 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d012a13-a56f-40e9-9689-43405f7c5cfd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d012a13-a56f-40e9-9689-43405f7c5cfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.190002 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d012a13-a56f-40e9-9689-43405f7c5cfd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d012a13-a56f-40e9-9689-43405f7c5cfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.190073 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d012a13-a56f-40e9-9689-43405f7c5cfd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d012a13-a56f-40e9-9689-43405f7c5cfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.190631 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkmms\" (UniqueName: \"kubernetes.io/projected/6d012a13-a56f-40e9-9689-43405f7c5cfd-kube-api-access-zkmms\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d012a13-a56f-40e9-9689-43405f7c5cfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.190869 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d012a13-a56f-40e9-9689-43405f7c5cfd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d012a13-a56f-40e9-9689-43405f7c5cfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.293978 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d012a13-a56f-40e9-9689-43405f7c5cfd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d012a13-a56f-40e9-9689-43405f7c5cfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.294034 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d012a13-a56f-40e9-9689-43405f7c5cfd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d012a13-a56f-40e9-9689-43405f7c5cfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.294083 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/6d012a13-a56f-40e9-9689-43405f7c5cfd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d012a13-a56f-40e9-9689-43405f7c5cfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.294203 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkmms\" (UniqueName: \"kubernetes.io/projected/6d012a13-a56f-40e9-9689-43405f7c5cfd-kube-api-access-zkmms\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d012a13-a56f-40e9-9689-43405f7c5cfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.294248 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d012a13-a56f-40e9-9689-43405f7c5cfd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d012a13-a56f-40e9-9689-43405f7c5cfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.300768 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d012a13-a56f-40e9-9689-43405f7c5cfd-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d012a13-a56f-40e9-9689-43405f7c5cfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.301088 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d012a13-a56f-40e9-9689-43405f7c5cfd-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d012a13-a56f-40e9-9689-43405f7c5cfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.301439 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6d012a13-a56f-40e9-9689-43405f7c5cfd-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d012a13-a56f-40e9-9689-43405f7c5cfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.316573 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d012a13-a56f-40e9-9689-43405f7c5cfd-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d012a13-a56f-40e9-9689-43405f7c5cfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.319135 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkmms\" (UniqueName: \"kubernetes.io/projected/6d012a13-a56f-40e9-9689-43405f7c5cfd-kube-api-access-zkmms\") pod \"nova-cell1-novncproxy-0\" (UID: \"6d012a13-a56f-40e9-9689-43405f7c5cfd\") " pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.369599 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.695984 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" event={"ID":"1c478894-8932-435f-966f-73b440b0ddab","Type":"ContainerStarted","Data":"4dd558df598e8d6e4b63799374c39052065e771f89a0cfa2c4e5187eb4fdd2a9"} Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.696669 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.697106 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0aae40e3-76ad-42ac-bf8a-1d05e9074798" containerName="nova-api-log" containerID="cri-o://b895595f1eacec3fee5653f312dd0bb192227d4ced7afc4beb0190bec6fb448f" gracePeriod=30 Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.697177 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0aae40e3-76ad-42ac-bf8a-1d05e9074798" containerName="nova-api-api" containerID="cri-o://7466ec57d05f0897c196d7ce17add0c4721d613a2e3283299f3f7fb464b51c0d" gracePeriod=30 Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.718063 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" podStartSLOduration=3.718053108 podStartE2EDuration="3.718053108s" podCreationTimestamp="2025-12-05 10:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:43:49.710580922 +0000 UTC m=+975.998686435" watchObservedRunningTime="2025-12-05 10:43:49.718053108 +0000 UTC m=+976.006158622" Dec 05 10:43:49 crc kubenswrapper[4796]: I1205 10:43:49.783462 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 05 10:43:50 crc 
kubenswrapper[4796]: I1205 10:43:50.043891 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b43aac5c-bb70-45de-99b4-6a992cf74202" path="/var/lib/kubelet/pods/b43aac5c-bb70-45de-99b4-6a992cf74202/volumes" Dec 05 10:43:50 crc kubenswrapper[4796]: E1205 10:43:50.251609 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39979c8a_37dd_458c_a841_0c5d2d3d29d4.slice/crio-e8c83dd766e34842bf8a7c5da21d7294a534171469bd3fc319086fc17213a9e7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd25dde1a_1eba_4b84_b637_134daea7451e.slice\": RecentStats: unable to find data in memory cache]" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.411623 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.521915 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39979c8a-37dd-458c-a841-0c5d2d3d29d4-log-httpd\") pod \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.522137 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-ceilometer-tls-certs\") pod \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.522190 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-config-data\") pod \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\" (UID: 
\"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.522212 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-sg-core-conf-yaml\") pod \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.522294 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-scripts\") pod \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.522342 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39979c8a-37dd-458c-a841-0c5d2d3d29d4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "39979c8a-37dd-458c-a841-0c5d2d3d29d4" (UID: "39979c8a-37dd-458c-a841-0c5d2d3d29d4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.522368 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-combined-ca-bundle\") pod \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.522451 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39979c8a-37dd-458c-a841-0c5d2d3d29d4-run-httpd\") pod \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.522546 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xdj4\" (UniqueName: \"kubernetes.io/projected/39979c8a-37dd-458c-a841-0c5d2d3d29d4-kube-api-access-2xdj4\") pod \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.522956 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39979c8a-37dd-458c-a841-0c5d2d3d29d4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "39979c8a-37dd-458c-a841-0c5d2d3d29d4" (UID: "39979c8a-37dd-458c-a841-0c5d2d3d29d4"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.523225 4796 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39979c8a-37dd-458c-a841-0c5d2d3d29d4-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.523247 4796 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/39979c8a-37dd-458c-a841-0c5d2d3d29d4-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.528638 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-scripts" (OuterVolumeSpecName: "scripts") pod "39979c8a-37dd-458c-a841-0c5d2d3d29d4" (UID: "39979c8a-37dd-458c-a841-0c5d2d3d29d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.528844 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39979c8a-37dd-458c-a841-0c5d2d3d29d4-kube-api-access-2xdj4" (OuterVolumeSpecName: "kube-api-access-2xdj4") pod "39979c8a-37dd-458c-a841-0c5d2d3d29d4" (UID: "39979c8a-37dd-458c-a841-0c5d2d3d29d4"). InnerVolumeSpecName "kube-api-access-2xdj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.548314 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "39979c8a-37dd-458c-a841-0c5d2d3d29d4" (UID: "39979c8a-37dd-458c-a841-0c5d2d3d29d4"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.577236 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "39979c8a-37dd-458c-a841-0c5d2d3d29d4" (UID: "39979c8a-37dd-458c-a841-0c5d2d3d29d4"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:50 crc kubenswrapper[4796]: E1205 10:43:50.600652 4796 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-config-data podName:39979c8a-37dd-458c-a841-0c5d2d3d29d4 nodeName:}" failed. No retries permitted until 2025-12-05 10:43:51.100624693 +0000 UTC m=+977.388730207 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-config-data") pod "39979c8a-37dd-458c-a841-0c5d2d3d29d4" (UID: "39979c8a-37dd-458c-a841-0c5d2d3d29d4") : error deleting /var/lib/kubelet/pods/39979c8a-37dd-458c-a841-0c5d2d3d29d4/volume-subpaths: remove /var/lib/kubelet/pods/39979c8a-37dd-458c-a841-0c5d2d3d29d4/volume-subpaths: no such file or directory Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.602558 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39979c8a-37dd-458c-a841-0c5d2d3d29d4" (UID: "39979c8a-37dd-458c-a841-0c5d2d3d29d4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.625961 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xdj4\" (UniqueName: \"kubernetes.io/projected/39979c8a-37dd-458c-a841-0c5d2d3d29d4-kube-api-access-2xdj4\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.625989 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.626000 4796 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.626009 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.626020 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.719911 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6d012a13-a56f-40e9-9689-43405f7c5cfd","Type":"ContainerStarted","Data":"0579ac91a2607e147287b9d5d539e2a79f675367c2f48d8ed518b470a5684a5c"} Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.719962 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"6d012a13-a56f-40e9-9689-43405f7c5cfd","Type":"ContainerStarted","Data":"8d1e9161af8252c6373b2c16f940faf7a2c56d3337ae5e861b07d666b8d093fb"} Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.726057 4796 generic.go:334] "Generic (PLEG): container finished" podID="0aae40e3-76ad-42ac-bf8a-1d05e9074798" containerID="b895595f1eacec3fee5653f312dd0bb192227d4ced7afc4beb0190bec6fb448f" exitCode=143 Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.726113 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0aae40e3-76ad-42ac-bf8a-1d05e9074798","Type":"ContainerDied","Data":"b895595f1eacec3fee5653f312dd0bb192227d4ced7afc4beb0190bec6fb448f"} Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.732880 4796 generic.go:334] "Generic (PLEG): container finished" podID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerID="a37c90ee2868906a0c0296a4928b9b87c449938af8cd985b03ac145f8eaba8fc" exitCode=0 Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.733463 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.733676 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39979c8a-37dd-458c-a841-0c5d2d3d29d4","Type":"ContainerDied","Data":"a37c90ee2868906a0c0296a4928b9b87c449938af8cd985b03ac145f8eaba8fc"} Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.733714 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"39979c8a-37dd-458c-a841-0c5d2d3d29d4","Type":"ContainerDied","Data":"e8c83dd766e34842bf8a7c5da21d7294a534171469bd3fc319086fc17213a9e7"} Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.733732 4796 scope.go:117] "RemoveContainer" containerID="ad9fa8ec4ec27f50f2771ea1c48e3918a5403ab2084d97cbbab1ba4487b5673b" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.735406 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.735397248 podStartE2EDuration="1.735397248s" podCreationTimestamp="2025-12-05 10:43:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:43:50.734287269 +0000 UTC m=+977.022392783" watchObservedRunningTime="2025-12-05 10:43:50.735397248 +0000 UTC m=+977.023502761" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.756594 4796 scope.go:117] "RemoveContainer" containerID="f7d8b28de3354d906a7e83e3c654ae8b3a4371221e31a6416a4e4212b38b2c3e" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.778924 4796 scope.go:117] "RemoveContainer" containerID="a37c90ee2868906a0c0296a4928b9b87c449938af8cd985b03ac145f8eaba8fc" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.798524 4796 scope.go:117] "RemoveContainer" containerID="7557c3c60650e5d457b364b714ce395b3b5115ce1b1ab00fb77bdc4dcaf1be12" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.819839 4796 
scope.go:117] "RemoveContainer" containerID="ad9fa8ec4ec27f50f2771ea1c48e3918a5403ab2084d97cbbab1ba4487b5673b" Dec 05 10:43:50 crc kubenswrapper[4796]: E1205 10:43:50.820411 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad9fa8ec4ec27f50f2771ea1c48e3918a5403ab2084d97cbbab1ba4487b5673b\": container with ID starting with ad9fa8ec4ec27f50f2771ea1c48e3918a5403ab2084d97cbbab1ba4487b5673b not found: ID does not exist" containerID="ad9fa8ec4ec27f50f2771ea1c48e3918a5403ab2084d97cbbab1ba4487b5673b" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.820448 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad9fa8ec4ec27f50f2771ea1c48e3918a5403ab2084d97cbbab1ba4487b5673b"} err="failed to get container status \"ad9fa8ec4ec27f50f2771ea1c48e3918a5403ab2084d97cbbab1ba4487b5673b\": rpc error: code = NotFound desc = could not find container \"ad9fa8ec4ec27f50f2771ea1c48e3918a5403ab2084d97cbbab1ba4487b5673b\": container with ID starting with ad9fa8ec4ec27f50f2771ea1c48e3918a5403ab2084d97cbbab1ba4487b5673b not found: ID does not exist" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.820474 4796 scope.go:117] "RemoveContainer" containerID="f7d8b28de3354d906a7e83e3c654ae8b3a4371221e31a6416a4e4212b38b2c3e" Dec 05 10:43:50 crc kubenswrapper[4796]: E1205 10:43:50.820901 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7d8b28de3354d906a7e83e3c654ae8b3a4371221e31a6416a4e4212b38b2c3e\": container with ID starting with f7d8b28de3354d906a7e83e3c654ae8b3a4371221e31a6416a4e4212b38b2c3e not found: ID does not exist" containerID="f7d8b28de3354d906a7e83e3c654ae8b3a4371221e31a6416a4e4212b38b2c3e" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.821028 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f7d8b28de3354d906a7e83e3c654ae8b3a4371221e31a6416a4e4212b38b2c3e"} err="failed to get container status \"f7d8b28de3354d906a7e83e3c654ae8b3a4371221e31a6416a4e4212b38b2c3e\": rpc error: code = NotFound desc = could not find container \"f7d8b28de3354d906a7e83e3c654ae8b3a4371221e31a6416a4e4212b38b2c3e\": container with ID starting with f7d8b28de3354d906a7e83e3c654ae8b3a4371221e31a6416a4e4212b38b2c3e not found: ID does not exist" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.821061 4796 scope.go:117] "RemoveContainer" containerID="a37c90ee2868906a0c0296a4928b9b87c449938af8cd985b03ac145f8eaba8fc" Dec 05 10:43:50 crc kubenswrapper[4796]: E1205 10:43:50.821450 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a37c90ee2868906a0c0296a4928b9b87c449938af8cd985b03ac145f8eaba8fc\": container with ID starting with a37c90ee2868906a0c0296a4928b9b87c449938af8cd985b03ac145f8eaba8fc not found: ID does not exist" containerID="a37c90ee2868906a0c0296a4928b9b87c449938af8cd985b03ac145f8eaba8fc" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.821491 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a37c90ee2868906a0c0296a4928b9b87c449938af8cd985b03ac145f8eaba8fc"} err="failed to get container status \"a37c90ee2868906a0c0296a4928b9b87c449938af8cd985b03ac145f8eaba8fc\": rpc error: code = NotFound desc = could not find container \"a37c90ee2868906a0c0296a4928b9b87c449938af8cd985b03ac145f8eaba8fc\": container with ID starting with a37c90ee2868906a0c0296a4928b9b87c449938af8cd985b03ac145f8eaba8fc not found: ID does not exist" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.821520 4796 scope.go:117] "RemoveContainer" containerID="7557c3c60650e5d457b364b714ce395b3b5115ce1b1ab00fb77bdc4dcaf1be12" Dec 05 10:43:50 crc kubenswrapper[4796]: E1205 10:43:50.821891 4796 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"7557c3c60650e5d457b364b714ce395b3b5115ce1b1ab00fb77bdc4dcaf1be12\": container with ID starting with 7557c3c60650e5d457b364b714ce395b3b5115ce1b1ab00fb77bdc4dcaf1be12 not found: ID does not exist" containerID="7557c3c60650e5d457b364b714ce395b3b5115ce1b1ab00fb77bdc4dcaf1be12" Dec 05 10:43:50 crc kubenswrapper[4796]: I1205 10:43:50.821921 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7557c3c60650e5d457b364b714ce395b3b5115ce1b1ab00fb77bdc4dcaf1be12"} err="failed to get container status \"7557c3c60650e5d457b364b714ce395b3b5115ce1b1ab00fb77bdc4dcaf1be12\": rpc error: code = NotFound desc = could not find container \"7557c3c60650e5d457b364b714ce395b3b5115ce1b1ab00fb77bdc4dcaf1be12\": container with ID starting with 7557c3c60650e5d457b364b714ce395b3b5115ce1b1ab00fb77bdc4dcaf1be12 not found: ID does not exist" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.137572 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-config-data\") pod \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\" (UID: \"39979c8a-37dd-458c-a841-0c5d2d3d29d4\") " Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.142240 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-config-data" (OuterVolumeSpecName: "config-data") pod "39979c8a-37dd-458c-a841-0c5d2d3d29d4" (UID: "39979c8a-37dd-458c-a841-0c5d2d3d29d4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.239513 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39979c8a-37dd-458c-a841-0c5d2d3d29d4-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.365379 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.371561 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.387669 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:43:51 crc kubenswrapper[4796]: E1205 10:43:51.388062 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerName="proxy-httpd" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.388081 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerName="proxy-httpd" Dec 05 10:43:51 crc kubenswrapper[4796]: E1205 10:43:51.388094 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerName="sg-core" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.388102 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerName="sg-core" Dec 05 10:43:51 crc kubenswrapper[4796]: E1205 10:43:51.388121 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerName="ceilometer-notification-agent" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.388127 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerName="ceilometer-notification-agent" Dec 05 10:43:51 crc kubenswrapper[4796]: E1205 
10:43:51.388136 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerName="ceilometer-central-agent" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.388142 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerName="ceilometer-central-agent" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.388312 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerName="ceilometer-central-agent" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.388323 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerName="sg-core" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.388337 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerName="ceilometer-notification-agent" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.388347 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" containerName="proxy-httpd" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.390068 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.393476 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.393606 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.393748 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.403864 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.443854 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea696700-f56d-4ca9-a810-410a2061a80e-log-httpd\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.444048 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea696700-f56d-4ca9-a810-410a2061a80e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.444151 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea696700-f56d-4ca9-a810-410a2061a80e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.444433 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nzv76\" (UniqueName: \"kubernetes.io/projected/ea696700-f56d-4ca9-a810-410a2061a80e-kube-api-access-nzv76\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.444501 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea696700-f56d-4ca9-a810-410a2061a80e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.444666 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea696700-f56d-4ca9-a810-410a2061a80e-config-data\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.444820 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea696700-f56d-4ca9-a810-410a2061a80e-run-httpd\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.444924 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea696700-f56d-4ca9-a810-410a2061a80e-scripts\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.545573 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzv76\" (UniqueName: \"kubernetes.io/projected/ea696700-f56d-4ca9-a810-410a2061a80e-kube-api-access-nzv76\") pod \"ceilometer-0\" (UID: 
\"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.545610 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea696700-f56d-4ca9-a810-410a2061a80e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.545649 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea696700-f56d-4ca9-a810-410a2061a80e-config-data\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.545705 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea696700-f56d-4ca9-a810-410a2061a80e-run-httpd\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.545739 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea696700-f56d-4ca9-a810-410a2061a80e-scripts\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.545773 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea696700-f56d-4ca9-a810-410a2061a80e-log-httpd\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.545803 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ea696700-f56d-4ca9-a810-410a2061a80e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.545821 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea696700-f56d-4ca9-a810-410a2061a80e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.546237 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea696700-f56d-4ca9-a810-410a2061a80e-log-httpd\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.546244 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea696700-f56d-4ca9-a810-410a2061a80e-run-httpd\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.548999 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea696700-f56d-4ca9-a810-410a2061a80e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.549004 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea696700-f56d-4ca9-a810-410a2061a80e-config-data\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.549662 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea696700-f56d-4ca9-a810-410a2061a80e-scripts\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.550138 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea696700-f56d-4ca9-a810-410a2061a80e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.553477 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea696700-f56d-4ca9-a810-410a2061a80e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.558999 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzv76\" (UniqueName: \"kubernetes.io/projected/ea696700-f56d-4ca9-a810-410a2061a80e-kube-api-access-nzv76\") pod \"ceilometer-0\" (UID: \"ea696700-f56d-4ca9-a810-410a2061a80e\") " pod="openstack/ceilometer-0" Dec 05 10:43:51 crc kubenswrapper[4796]: I1205 10:43:51.707631 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 05 10:43:52 crc kubenswrapper[4796]: I1205 10:43:52.040467 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39979c8a-37dd-458c-a841-0c5d2d3d29d4" path="/var/lib/kubelet/pods/39979c8a-37dd-458c-a841-0c5d2d3d29d4/volumes" Dec 05 10:43:52 crc kubenswrapper[4796]: W1205 10:43:52.092392 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea696700_f56d_4ca9_a810_410a2061a80e.slice/crio-6f41cd89526b8a69223a2b40c66114c56ac631b6ab77074b719da661e73fe577 WatchSource:0}: Error finding container 6f41cd89526b8a69223a2b40c66114c56ac631b6ab77074b719da661e73fe577: Status 404 returned error can't find the container with id 6f41cd89526b8a69223a2b40c66114c56ac631b6ab77074b719da661e73fe577 Dec 05 10:43:52 crc kubenswrapper[4796]: I1205 10:43:52.092859 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 05 10:43:52 crc kubenswrapper[4796]: I1205 10:43:52.768610 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea696700-f56d-4ca9-a810-410a2061a80e","Type":"ContainerStarted","Data":"6f41cd89526b8a69223a2b40c66114c56ac631b6ab77074b719da661e73fe577"} Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.226105 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.385814 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aae40e3-76ad-42ac-bf8a-1d05e9074798-combined-ca-bundle\") pod \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\" (UID: \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\") " Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.386057 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aae40e3-76ad-42ac-bf8a-1d05e9074798-logs\") pod \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\" (UID: \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\") " Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.386356 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aae40e3-76ad-42ac-bf8a-1d05e9074798-config-data\") pod \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\" (UID: \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\") " Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.386557 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pldbm\" (UniqueName: \"kubernetes.io/projected/0aae40e3-76ad-42ac-bf8a-1d05e9074798-kube-api-access-pldbm\") pod \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\" (UID: \"0aae40e3-76ad-42ac-bf8a-1d05e9074798\") " Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.386781 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0aae40e3-76ad-42ac-bf8a-1d05e9074798-logs" (OuterVolumeSpecName: "logs") pod "0aae40e3-76ad-42ac-bf8a-1d05e9074798" (UID: "0aae40e3-76ad-42ac-bf8a-1d05e9074798"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.388006 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0aae40e3-76ad-42ac-bf8a-1d05e9074798-logs\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.391657 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aae40e3-76ad-42ac-bf8a-1d05e9074798-kube-api-access-pldbm" (OuterVolumeSpecName: "kube-api-access-pldbm") pod "0aae40e3-76ad-42ac-bf8a-1d05e9074798" (UID: "0aae40e3-76ad-42ac-bf8a-1d05e9074798"). InnerVolumeSpecName "kube-api-access-pldbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.411116 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aae40e3-76ad-42ac-bf8a-1d05e9074798-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0aae40e3-76ad-42ac-bf8a-1d05e9074798" (UID: "0aae40e3-76ad-42ac-bf8a-1d05e9074798"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.411516 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aae40e3-76ad-42ac-bf8a-1d05e9074798-config-data" (OuterVolumeSpecName: "config-data") pod "0aae40e3-76ad-42ac-bf8a-1d05e9074798" (UID: "0aae40e3-76ad-42ac-bf8a-1d05e9074798"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.489463 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pldbm\" (UniqueName: \"kubernetes.io/projected/0aae40e3-76ad-42ac-bf8a-1d05e9074798-kube-api-access-pldbm\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.489488 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aae40e3-76ad-42ac-bf8a-1d05e9074798-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.489498 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0aae40e3-76ad-42ac-bf8a-1d05e9074798-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.780341 4796 generic.go:334] "Generic (PLEG): container finished" podID="0aae40e3-76ad-42ac-bf8a-1d05e9074798" containerID="7466ec57d05f0897c196d7ce17add0c4721d613a2e3283299f3f7fb464b51c0d" exitCode=0 Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.780417 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0aae40e3-76ad-42ac-bf8a-1d05e9074798","Type":"ContainerDied","Data":"7466ec57d05f0897c196d7ce17add0c4721d613a2e3283299f3f7fb464b51c0d"} Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.780731 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0aae40e3-76ad-42ac-bf8a-1d05e9074798","Type":"ContainerDied","Data":"0656bb57b04338402146da7944052bbb71c0ff6e66d5e94a25c298d90155e3c7"} Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.780755 4796 scope.go:117] "RemoveContainer" containerID="7466ec57d05f0897c196d7ce17add0c4721d613a2e3283299f3f7fb464b51c0d" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.780442 4796 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.783569 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea696700-f56d-4ca9-a810-410a2061a80e","Type":"ContainerStarted","Data":"0bad3de0b185d126fe2ea13cfe3d84db287363d632ed4cf028d2e5debe29ad54"} Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.783617 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea696700-f56d-4ca9-a810-410a2061a80e","Type":"ContainerStarted","Data":"4513e82ee6b3d94b4a3fe15c720bd34157dc6537b70abd6daca60ade6f0e9ce8"} Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.806116 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.808285 4796 scope.go:117] "RemoveContainer" containerID="b895595f1eacec3fee5653f312dd0bb192227d4ced7afc4beb0190bec6fb448f" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.814660 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.835193 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 10:43:53 crc kubenswrapper[4796]: E1205 10:43:53.835535 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aae40e3-76ad-42ac-bf8a-1d05e9074798" containerName="nova-api-log" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.835552 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aae40e3-76ad-42ac-bf8a-1d05e9074798" containerName="nova-api-log" Dec 05 10:43:53 crc kubenswrapper[4796]: E1205 10:43:53.835584 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aae40e3-76ad-42ac-bf8a-1d05e9074798" containerName="nova-api-api" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.835590 4796 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0aae40e3-76ad-42ac-bf8a-1d05e9074798" containerName="nova-api-api" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.835772 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aae40e3-76ad-42ac-bf8a-1d05e9074798" containerName="nova-api-log" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.835789 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aae40e3-76ad-42ac-bf8a-1d05e9074798" containerName="nova-api-api" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.836659 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.837193 4796 scope.go:117] "RemoveContainer" containerID="7466ec57d05f0897c196d7ce17add0c4721d613a2e3283299f3f7fb464b51c0d" Dec 05 10:43:53 crc kubenswrapper[4796]: E1205 10:43:53.837779 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7466ec57d05f0897c196d7ce17add0c4721d613a2e3283299f3f7fb464b51c0d\": container with ID starting with 7466ec57d05f0897c196d7ce17add0c4721d613a2e3283299f3f7fb464b51c0d not found: ID does not exist" containerID="7466ec57d05f0897c196d7ce17add0c4721d613a2e3283299f3f7fb464b51c0d" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.837828 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7466ec57d05f0897c196d7ce17add0c4721d613a2e3283299f3f7fb464b51c0d"} err="failed to get container status \"7466ec57d05f0897c196d7ce17add0c4721d613a2e3283299f3f7fb464b51c0d\": rpc error: code = NotFound desc = could not find container \"7466ec57d05f0897c196d7ce17add0c4721d613a2e3283299f3f7fb464b51c0d\": container with ID starting with 7466ec57d05f0897c196d7ce17add0c4721d613a2e3283299f3f7fb464b51c0d not found: ID does not exist" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.837861 4796 scope.go:117] "RemoveContainer" 
containerID="b895595f1eacec3fee5653f312dd0bb192227d4ced7afc4beb0190bec6fb448f" Dec 05 10:43:53 crc kubenswrapper[4796]: E1205 10:43:53.838199 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b895595f1eacec3fee5653f312dd0bb192227d4ced7afc4beb0190bec6fb448f\": container with ID starting with b895595f1eacec3fee5653f312dd0bb192227d4ced7afc4beb0190bec6fb448f not found: ID does not exist" containerID="b895595f1eacec3fee5653f312dd0bb192227d4ced7afc4beb0190bec6fb448f" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.838237 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b895595f1eacec3fee5653f312dd0bb192227d4ced7afc4beb0190bec6fb448f"} err="failed to get container status \"b895595f1eacec3fee5653f312dd0bb192227d4ced7afc4beb0190bec6fb448f\": rpc error: code = NotFound desc = could not find container \"b895595f1eacec3fee5653f312dd0bb192227d4ced7afc4beb0190bec6fb448f\": container with ID starting with b895595f1eacec3fee5653f312dd0bb192227d4ced7afc4beb0190bec6fb448f not found: ID does not exist" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.839089 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.839556 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.839994 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.846125 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.923608 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-public-tls-certs\") pod \"nova-api-0\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " pod="openstack/nova-api-0" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.923890 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61509801-e0f4-4b5e-ac89-725b32c0fb42-logs\") pod \"nova-api-0\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " pod="openstack/nova-api-0" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.923919 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " pod="openstack/nova-api-0" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.923946 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrnz5\" (UniqueName: \"kubernetes.io/projected/61509801-e0f4-4b5e-ac89-725b32c0fb42-kube-api-access-lrnz5\") pod \"nova-api-0\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " pod="openstack/nova-api-0" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.924034 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-config-data\") pod \"nova-api-0\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " pod="openstack/nova-api-0" Dec 05 10:43:53 crc kubenswrapper[4796]: I1205 10:43:53.924066 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " pod="openstack/nova-api-0" Dec 05 10:43:54 crc kubenswrapper[4796]: I1205 10:43:54.026229 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-config-data\") pod \"nova-api-0\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " pod="openstack/nova-api-0" Dec 05 10:43:54 crc kubenswrapper[4796]: I1205 10:43:54.026300 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-internal-tls-certs\") pod \"nova-api-0\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " pod="openstack/nova-api-0" Dec 05 10:43:54 crc kubenswrapper[4796]: I1205 10:43:54.026494 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-public-tls-certs\") pod \"nova-api-0\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " pod="openstack/nova-api-0" Dec 05 10:43:54 crc kubenswrapper[4796]: I1205 10:43:54.026937 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61509801-e0f4-4b5e-ac89-725b32c0fb42-logs\") pod \"nova-api-0\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " pod="openstack/nova-api-0" Dec 05 10:43:54 crc kubenswrapper[4796]: I1205 10:43:54.026965 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " pod="openstack/nova-api-0" Dec 05 10:43:54 crc kubenswrapper[4796]: I1205 10:43:54.026995 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrnz5\" 
(UniqueName: \"kubernetes.io/projected/61509801-e0f4-4b5e-ac89-725b32c0fb42-kube-api-access-lrnz5\") pod \"nova-api-0\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " pod="openstack/nova-api-0" Dec 05 10:43:54 crc kubenswrapper[4796]: I1205 10:43:54.027491 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61509801-e0f4-4b5e-ac89-725b32c0fb42-logs\") pod \"nova-api-0\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " pod="openstack/nova-api-0" Dec 05 10:43:54 crc kubenswrapper[4796]: I1205 10:43:54.029838 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-config-data\") pod \"nova-api-0\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " pod="openstack/nova-api-0" Dec 05 10:43:54 crc kubenswrapper[4796]: I1205 10:43:54.029953 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-public-tls-certs\") pod \"nova-api-0\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " pod="openstack/nova-api-0" Dec 05 10:43:54 crc kubenswrapper[4796]: I1205 10:43:54.030446 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " pod="openstack/nova-api-0" Dec 05 10:43:54 crc kubenswrapper[4796]: I1205 10:43:54.032271 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-internal-tls-certs\") pod \"nova-api-0\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " pod="openstack/nova-api-0" Dec 05 10:43:54 crc kubenswrapper[4796]: I1205 10:43:54.044938 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lrnz5\" (UniqueName: \"kubernetes.io/projected/61509801-e0f4-4b5e-ac89-725b32c0fb42-kube-api-access-lrnz5\") pod \"nova-api-0\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " pod="openstack/nova-api-0" Dec 05 10:43:54 crc kubenswrapper[4796]: I1205 10:43:54.045595 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aae40e3-76ad-42ac-bf8a-1d05e9074798" path="/var/lib/kubelet/pods/0aae40e3-76ad-42ac-bf8a-1d05e9074798/volumes" Dec 05 10:43:54 crc kubenswrapper[4796]: I1205 10:43:54.155428 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 10:43:54 crc kubenswrapper[4796]: I1205 10:43:54.370075 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:54 crc kubenswrapper[4796]: I1205 10:43:54.571282 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 10:43:54 crc kubenswrapper[4796]: W1205 10:43:54.579827 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61509801_e0f4_4b5e_ac89_725b32c0fb42.slice/crio-70f7bb6b1e1213c1cc2e285e47c061af50712c69cc905dec57f4ae047193c325 WatchSource:0}: Error finding container 70f7bb6b1e1213c1cc2e285e47c061af50712c69cc905dec57f4ae047193c325: Status 404 returned error can't find the container with id 70f7bb6b1e1213c1cc2e285e47c061af50712c69cc905dec57f4ae047193c325 Dec 05 10:43:54 crc kubenswrapper[4796]: I1205 10:43:54.806778 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea696700-f56d-4ca9-a810-410a2061a80e","Type":"ContainerStarted","Data":"53c20c264280f65a03dd350c3f6064b880d265d0e05677073160cb2847123c8d"} Dec 05 10:43:54 crc kubenswrapper[4796]: I1205 10:43:54.808254 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"61509801-e0f4-4b5e-ac89-725b32c0fb42","Type":"ContainerStarted","Data":"28c2609a4ebce4166587e59f413d53910acbacc3831cdafba4fc11929b973292"} Dec 05 10:43:54 crc kubenswrapper[4796]: I1205 10:43:54.808281 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61509801-e0f4-4b5e-ac89-725b32c0fb42","Type":"ContainerStarted","Data":"70f7bb6b1e1213c1cc2e285e47c061af50712c69cc905dec57f4ae047193c325"} Dec 05 10:43:55 crc kubenswrapper[4796]: I1205 10:43:55.818044 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea696700-f56d-4ca9-a810-410a2061a80e","Type":"ContainerStarted","Data":"d532cc3e9f246c064db86e40450663de208d38c144308c9fe6b06e38a25d8265"} Dec 05 10:43:55 crc kubenswrapper[4796]: I1205 10:43:55.819423 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 05 10:43:55 crc kubenswrapper[4796]: I1205 10:43:55.820496 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61509801-e0f4-4b5e-ac89-725b32c0fb42","Type":"ContainerStarted","Data":"5842ddab572b06a99a92854b5cdb0bd6269b35471c509d1c5410ba0a5639b6cc"} Dec 05 10:43:55 crc kubenswrapper[4796]: I1205 10:43:55.841886 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.39873496 podStartE2EDuration="4.84187595s" podCreationTimestamp="2025-12-05 10:43:51 +0000 UTC" firstStartedPulling="2025-12-05 10:43:52.094652857 +0000 UTC m=+978.382758370" lastFinishedPulling="2025-12-05 10:43:55.537793837 +0000 UTC m=+981.825899360" observedRunningTime="2025-12-05 10:43:55.839539234 +0000 UTC m=+982.127644747" watchObservedRunningTime="2025-12-05 10:43:55.84187595 +0000 UTC m=+982.129981464" Dec 05 10:43:55 crc kubenswrapper[4796]: I1205 10:43:55.855585 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" 
podStartSLOduration=2.855577156 podStartE2EDuration="2.855577156s" podCreationTimestamp="2025-12-05 10:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:43:55.853723158 +0000 UTC m=+982.141828671" watchObservedRunningTime="2025-12-05 10:43:55.855577156 +0000 UTC m=+982.143682669" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.136798 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.182821 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c4475fdfc-65svm"] Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.183065 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" podUID="bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5" containerName="dnsmasq-dns" containerID="cri-o://d7f7ffbd72266b83d0143293970d4fadcc6ba2e0db040e4938b78d81e9139508" gracePeriod=10 Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.625711 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.702539 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctm9m\" (UniqueName: \"kubernetes.io/projected/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-kube-api-access-ctm9m\") pod \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.702585 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-ovsdbserver-nb\") pod \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.702648 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-ovsdbserver-sb\") pod \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.702776 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-dns-svc\") pod \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.702817 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-dns-swift-storage-0\") pod \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.702932 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-config\") pod \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\" (UID: \"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5\") " Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.708117 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-kube-api-access-ctm9m" (OuterVolumeSpecName: "kube-api-access-ctm9m") pod "bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5" (UID: "bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5"). InnerVolumeSpecName "kube-api-access-ctm9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.748172 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5" (UID: "bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.748398 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5" (UID: "bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.751359 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5" (UID: "bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.761972 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-config" (OuterVolumeSpecName: "config") pod "bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5" (UID: "bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.764090 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5" (UID: "bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.805864 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.805895 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.805907 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.805916 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctm9m\" (UniqueName: \"kubernetes.io/projected/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-kube-api-access-ctm9m\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:57 crc 
kubenswrapper[4796]: I1205 10:43:57.805926 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.805935 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.851084 4796 generic.go:334] "Generic (PLEG): container finished" podID="bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5" containerID="d7f7ffbd72266b83d0143293970d4fadcc6ba2e0db040e4938b78d81e9139508" exitCode=0 Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.851128 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" event={"ID":"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5","Type":"ContainerDied","Data":"d7f7ffbd72266b83d0143293970d4fadcc6ba2e0db040e4938b78d81e9139508"} Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.851185 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" event={"ID":"bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5","Type":"ContainerDied","Data":"ae92d98d1273d5f33e6c873a275ed50a4d87c0feae4f2e6491a193c5e5d4324e"} Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.851205 4796 scope.go:117] "RemoveContainer" containerID="d7f7ffbd72266b83d0143293970d4fadcc6ba2e0db040e4938b78d81e9139508" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.851203 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c4475fdfc-65svm" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.881386 4796 scope.go:117] "RemoveContainer" containerID="bc416c3bc2623dc22b038d73b25eca6f691f88b0fac113ed670d2093d0ccb839" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.887039 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c4475fdfc-65svm"] Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.895277 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c4475fdfc-65svm"] Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.924011 4796 scope.go:117] "RemoveContainer" containerID="d7f7ffbd72266b83d0143293970d4fadcc6ba2e0db040e4938b78d81e9139508" Dec 05 10:43:57 crc kubenswrapper[4796]: E1205 10:43:57.924478 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7f7ffbd72266b83d0143293970d4fadcc6ba2e0db040e4938b78d81e9139508\": container with ID starting with d7f7ffbd72266b83d0143293970d4fadcc6ba2e0db040e4938b78d81e9139508 not found: ID does not exist" containerID="d7f7ffbd72266b83d0143293970d4fadcc6ba2e0db040e4938b78d81e9139508" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.924514 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7f7ffbd72266b83d0143293970d4fadcc6ba2e0db040e4938b78d81e9139508"} err="failed to get container status \"d7f7ffbd72266b83d0143293970d4fadcc6ba2e0db040e4938b78d81e9139508\": rpc error: code = NotFound desc = could not find container \"d7f7ffbd72266b83d0143293970d4fadcc6ba2e0db040e4938b78d81e9139508\": container with ID starting with d7f7ffbd72266b83d0143293970d4fadcc6ba2e0db040e4938b78d81e9139508 not found: ID does not exist" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.924542 4796 scope.go:117] "RemoveContainer" containerID="bc416c3bc2623dc22b038d73b25eca6f691f88b0fac113ed670d2093d0ccb839" Dec 05 
10:43:57 crc kubenswrapper[4796]: E1205 10:43:57.924888 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc416c3bc2623dc22b038d73b25eca6f691f88b0fac113ed670d2093d0ccb839\": container with ID starting with bc416c3bc2623dc22b038d73b25eca6f691f88b0fac113ed670d2093d0ccb839 not found: ID does not exist" containerID="bc416c3bc2623dc22b038d73b25eca6f691f88b0fac113ed670d2093d0ccb839" Dec 05 10:43:57 crc kubenswrapper[4796]: I1205 10:43:57.924925 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc416c3bc2623dc22b038d73b25eca6f691f88b0fac113ed670d2093d0ccb839"} err="failed to get container status \"bc416c3bc2623dc22b038d73b25eca6f691f88b0fac113ed670d2093d0ccb839\": rpc error: code = NotFound desc = could not find container \"bc416c3bc2623dc22b038d73b25eca6f691f88b0fac113ed670d2093d0ccb839\": container with ID starting with bc416c3bc2623dc22b038d73b25eca6f691f88b0fac113ed670d2093d0ccb839 not found: ID does not exist" Dec 05 10:43:58 crc kubenswrapper[4796]: I1205 10:43:58.039936 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5" path="/var/lib/kubelet/pods/bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5/volumes" Dec 05 10:43:59 crc kubenswrapper[4796]: I1205 10:43:59.370225 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:59 crc kubenswrapper[4796]: I1205 10:43:59.385491 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:43:59 crc kubenswrapper[4796]: I1205 10:43:59.888512 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.002295 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-lz7nl"] Dec 
05 10:44:00 crc kubenswrapper[4796]: E1205 10:44:00.002793 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5" containerName="init" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.002810 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5" containerName="init" Dec 05 10:44:00 crc kubenswrapper[4796]: E1205 10:44:00.002829 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5" containerName="dnsmasq-dns" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.002836 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5" containerName="dnsmasq-dns" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.003048 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd9d55cf-0a0e-4717-8f65-cf7eba85b5a5" containerName="dnsmasq-dns" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.003733 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lz7nl" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.005900 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.005900 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.010844 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lz7nl"] Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.051180 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a07ea2d4-a394-444a-a8d8-14970378437e-scripts\") pod \"nova-cell1-cell-mapping-lz7nl\" (UID: \"a07ea2d4-a394-444a-a8d8-14970378437e\") " pod="openstack/nova-cell1-cell-mapping-lz7nl" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.051226 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07ea2d4-a394-444a-a8d8-14970378437e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lz7nl\" (UID: \"a07ea2d4-a394-444a-a8d8-14970378437e\") " pod="openstack/nova-cell1-cell-mapping-lz7nl" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.051402 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a07ea2d4-a394-444a-a8d8-14970378437e-config-data\") pod \"nova-cell1-cell-mapping-lz7nl\" (UID: \"a07ea2d4-a394-444a-a8d8-14970378437e\") " pod="openstack/nova-cell1-cell-mapping-lz7nl" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.051482 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kfpj\" (UniqueName: 
\"kubernetes.io/projected/a07ea2d4-a394-444a-a8d8-14970378437e-kube-api-access-6kfpj\") pod \"nova-cell1-cell-mapping-lz7nl\" (UID: \"a07ea2d4-a394-444a-a8d8-14970378437e\") " pod="openstack/nova-cell1-cell-mapping-lz7nl" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.153309 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a07ea2d4-a394-444a-a8d8-14970378437e-scripts\") pod \"nova-cell1-cell-mapping-lz7nl\" (UID: \"a07ea2d4-a394-444a-a8d8-14970378437e\") " pod="openstack/nova-cell1-cell-mapping-lz7nl" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.153352 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07ea2d4-a394-444a-a8d8-14970378437e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lz7nl\" (UID: \"a07ea2d4-a394-444a-a8d8-14970378437e\") " pod="openstack/nova-cell1-cell-mapping-lz7nl" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.153467 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a07ea2d4-a394-444a-a8d8-14970378437e-config-data\") pod \"nova-cell1-cell-mapping-lz7nl\" (UID: \"a07ea2d4-a394-444a-a8d8-14970378437e\") " pod="openstack/nova-cell1-cell-mapping-lz7nl" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.153521 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kfpj\" (UniqueName: \"kubernetes.io/projected/a07ea2d4-a394-444a-a8d8-14970378437e-kube-api-access-6kfpj\") pod \"nova-cell1-cell-mapping-lz7nl\" (UID: \"a07ea2d4-a394-444a-a8d8-14970378437e\") " pod="openstack/nova-cell1-cell-mapping-lz7nl" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.160474 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a07ea2d4-a394-444a-a8d8-14970378437e-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-lz7nl\" (UID: \"a07ea2d4-a394-444a-a8d8-14970378437e\") " pod="openstack/nova-cell1-cell-mapping-lz7nl" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.161140 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a07ea2d4-a394-444a-a8d8-14970378437e-config-data\") pod \"nova-cell1-cell-mapping-lz7nl\" (UID: \"a07ea2d4-a394-444a-a8d8-14970378437e\") " pod="openstack/nova-cell1-cell-mapping-lz7nl" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.161272 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a07ea2d4-a394-444a-a8d8-14970378437e-scripts\") pod \"nova-cell1-cell-mapping-lz7nl\" (UID: \"a07ea2d4-a394-444a-a8d8-14970378437e\") " pod="openstack/nova-cell1-cell-mapping-lz7nl" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.166772 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kfpj\" (UniqueName: \"kubernetes.io/projected/a07ea2d4-a394-444a-a8d8-14970378437e-kube-api-access-6kfpj\") pod \"nova-cell1-cell-mapping-lz7nl\" (UID: \"a07ea2d4-a394-444a-a8d8-14970378437e\") " pod="openstack/nova-cell1-cell-mapping-lz7nl" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.317109 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lz7nl" Dec 05 10:44:00 crc kubenswrapper[4796]: E1205 10:44:00.520360 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd25dde1a_1eba_4b84_b637_134daea7451e.slice\": RecentStats: unable to find data in memory cache]" Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.718918 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-lz7nl"] Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.882062 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lz7nl" event={"ID":"a07ea2d4-a394-444a-a8d8-14970378437e","Type":"ContainerStarted","Data":"5d1b1fc3950f3d7b05037e5026958b35a39961d9db25f552ae7916169b027704"} Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.882241 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lz7nl" event={"ID":"a07ea2d4-a394-444a-a8d8-14970378437e","Type":"ContainerStarted","Data":"ed65bf1f2668ab76194059ce396b2b6c5fbe9df9af6b83ebeeeea1e961d4967e"} Dec 05 10:44:00 crc kubenswrapper[4796]: I1205 10:44:00.895725 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-lz7nl" podStartSLOduration=1.895711807 podStartE2EDuration="1.895711807s" podCreationTimestamp="2025-12-05 10:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:44:00.894010596 +0000 UTC m=+987.182116109" watchObservedRunningTime="2025-12-05 10:44:00.895711807 +0000 UTC m=+987.183817320" Dec 05 10:44:04 crc kubenswrapper[4796]: I1205 10:44:04.155620 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 10:44:04 crc kubenswrapper[4796]: I1205 
10:44:04.156152 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 10:44:04 crc kubenswrapper[4796]: I1205 10:44:04.915306 4796 generic.go:334] "Generic (PLEG): container finished" podID="a07ea2d4-a394-444a-a8d8-14970378437e" containerID="5d1b1fc3950f3d7b05037e5026958b35a39961d9db25f552ae7916169b027704" exitCode=0 Dec 05 10:44:04 crc kubenswrapper[4796]: I1205 10:44:04.915349 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lz7nl" event={"ID":"a07ea2d4-a394-444a-a8d8-14970378437e","Type":"ContainerDied","Data":"5d1b1fc3950f3d7b05037e5026958b35a39961d9db25f552ae7916169b027704"} Dec 05 10:44:05 crc kubenswrapper[4796]: I1205 10:44:05.169802 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="61509801-e0f4-4b5e-ac89-725b32c0fb42" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.197:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 10:44:05 crc kubenswrapper[4796]: I1205 10:44:05.169812 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="61509801-e0f4-4b5e-ac89-725b32c0fb42" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.197:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 10:44:06 crc kubenswrapper[4796]: I1205 10:44:06.220254 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lz7nl" Dec 05 10:44:06 crc kubenswrapper[4796]: I1205 10:44:06.363612 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a07ea2d4-a394-444a-a8d8-14970378437e-scripts\") pod \"a07ea2d4-a394-444a-a8d8-14970378437e\" (UID: \"a07ea2d4-a394-444a-a8d8-14970378437e\") " Dec 05 10:44:06 crc kubenswrapper[4796]: I1205 10:44:06.363656 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kfpj\" (UniqueName: \"kubernetes.io/projected/a07ea2d4-a394-444a-a8d8-14970378437e-kube-api-access-6kfpj\") pod \"a07ea2d4-a394-444a-a8d8-14970378437e\" (UID: \"a07ea2d4-a394-444a-a8d8-14970378437e\") " Dec 05 10:44:06 crc kubenswrapper[4796]: I1205 10:44:06.363708 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a07ea2d4-a394-444a-a8d8-14970378437e-config-data\") pod \"a07ea2d4-a394-444a-a8d8-14970378437e\" (UID: \"a07ea2d4-a394-444a-a8d8-14970378437e\") " Dec 05 10:44:06 crc kubenswrapper[4796]: I1205 10:44:06.363735 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07ea2d4-a394-444a-a8d8-14970378437e-combined-ca-bundle\") pod \"a07ea2d4-a394-444a-a8d8-14970378437e\" (UID: \"a07ea2d4-a394-444a-a8d8-14970378437e\") " Dec 05 10:44:06 crc kubenswrapper[4796]: I1205 10:44:06.368674 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a07ea2d4-a394-444a-a8d8-14970378437e-scripts" (OuterVolumeSpecName: "scripts") pod "a07ea2d4-a394-444a-a8d8-14970378437e" (UID: "a07ea2d4-a394-444a-a8d8-14970378437e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:44:06 crc kubenswrapper[4796]: I1205 10:44:06.368811 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a07ea2d4-a394-444a-a8d8-14970378437e-kube-api-access-6kfpj" (OuterVolumeSpecName: "kube-api-access-6kfpj") pod "a07ea2d4-a394-444a-a8d8-14970378437e" (UID: "a07ea2d4-a394-444a-a8d8-14970378437e"). InnerVolumeSpecName "kube-api-access-6kfpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:44:06 crc kubenswrapper[4796]: I1205 10:44:06.385612 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a07ea2d4-a394-444a-a8d8-14970378437e-config-data" (OuterVolumeSpecName: "config-data") pod "a07ea2d4-a394-444a-a8d8-14970378437e" (UID: "a07ea2d4-a394-444a-a8d8-14970378437e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:44:06 crc kubenswrapper[4796]: I1205 10:44:06.386227 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a07ea2d4-a394-444a-a8d8-14970378437e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a07ea2d4-a394-444a-a8d8-14970378437e" (UID: "a07ea2d4-a394-444a-a8d8-14970378437e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:44:06 crc kubenswrapper[4796]: I1205 10:44:06.465796 4796 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a07ea2d4-a394-444a-a8d8-14970378437e-scripts\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:06 crc kubenswrapper[4796]: I1205 10:44:06.465826 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kfpj\" (UniqueName: \"kubernetes.io/projected/a07ea2d4-a394-444a-a8d8-14970378437e-kube-api-access-6kfpj\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:06 crc kubenswrapper[4796]: I1205 10:44:06.465838 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a07ea2d4-a394-444a-a8d8-14970378437e-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:06 crc kubenswrapper[4796]: I1205 10:44:06.465848 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a07ea2d4-a394-444a-a8d8-14970378437e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:06 crc kubenswrapper[4796]: I1205 10:44:06.931487 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-lz7nl" event={"ID":"a07ea2d4-a394-444a-a8d8-14970378437e","Type":"ContainerDied","Data":"ed65bf1f2668ab76194059ce396b2b6c5fbe9df9af6b83ebeeeea1e961d4967e"} Dec 05 10:44:06 crc kubenswrapper[4796]: I1205 10:44:06.931821 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed65bf1f2668ab76194059ce396b2b6c5fbe9df9af6b83ebeeeea1e961d4967e" Dec 05 10:44:06 crc kubenswrapper[4796]: I1205 10:44:06.931883 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-lz7nl" Dec 05 10:44:07 crc kubenswrapper[4796]: I1205 10:44:07.090019 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 10:44:07 crc kubenswrapper[4796]: I1205 10:44:07.090402 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="61509801-e0f4-4b5e-ac89-725b32c0fb42" containerName="nova-api-log" containerID="cri-o://28c2609a4ebce4166587e59f413d53910acbacc3831cdafba4fc11929b973292" gracePeriod=30 Dec 05 10:44:07 crc kubenswrapper[4796]: I1205 10:44:07.090512 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="61509801-e0f4-4b5e-ac89-725b32c0fb42" containerName="nova-api-api" containerID="cri-o://5842ddab572b06a99a92854b5cdb0bd6269b35471c509d1c5410ba0a5639b6cc" gracePeriod=30 Dec 05 10:44:07 crc kubenswrapper[4796]: I1205 10:44:07.109603 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:44:07 crc kubenswrapper[4796]: I1205 10:44:07.109865 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cecbf859-aee4-42fc-9ec0-6e552f405df6" containerName="nova-metadata-log" containerID="cri-o://b5d46b11549ee601e73a39682a5dea69a95488b41bb630d7e86c0c6c5c2fa0af" gracePeriod=30 Dec 05 10:44:07 crc kubenswrapper[4796]: I1205 10:44:07.109978 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cecbf859-aee4-42fc-9ec0-6e552f405df6" containerName="nova-metadata-metadata" containerID="cri-o://fd1c85398e52080d36f578e0ab0bcead467de3c88188b6a896613c4113e7eb28" gracePeriod=30 Dec 05 10:44:07 crc kubenswrapper[4796]: I1205 10:44:07.126968 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 10:44:07 crc kubenswrapper[4796]: I1205 10:44:07.127231 4796 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="19820d25-5afd-4e22-b618-d7ee6e8d36b4" containerName="nova-scheduler-scheduler" containerID="cri-o://bf90440e7964caf3929151b8bd2434289dc07e0cf44c35beb722c0776ce0c78d" gracePeriod=30 Dec 05 10:44:07 crc kubenswrapper[4796]: I1205 10:44:07.939656 4796 generic.go:334] "Generic (PLEG): container finished" podID="cecbf859-aee4-42fc-9ec0-6e552f405df6" containerID="b5d46b11549ee601e73a39682a5dea69a95488b41bb630d7e86c0c6c5c2fa0af" exitCode=143 Dec 05 10:44:07 crc kubenswrapper[4796]: I1205 10:44:07.939782 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cecbf859-aee4-42fc-9ec0-6e552f405df6","Type":"ContainerDied","Data":"b5d46b11549ee601e73a39682a5dea69a95488b41bb630d7e86c0c6c5c2fa0af"} Dec 05 10:44:07 crc kubenswrapper[4796]: I1205 10:44:07.941425 4796 generic.go:334] "Generic (PLEG): container finished" podID="61509801-e0f4-4b5e-ac89-725b32c0fb42" containerID="28c2609a4ebce4166587e59f413d53910acbacc3831cdafba4fc11929b973292" exitCode=143 Dec 05 10:44:07 crc kubenswrapper[4796]: I1205 10:44:07.941485 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61509801-e0f4-4b5e-ac89-725b32c0fb42","Type":"ContainerDied","Data":"28c2609a4ebce4166587e59f413d53910acbacc3831cdafba4fc11929b973292"} Dec 05 10:44:08 crc kubenswrapper[4796]: I1205 10:44:08.950290 4796 generic.go:334] "Generic (PLEG): container finished" podID="19820d25-5afd-4e22-b618-d7ee6e8d36b4" containerID="bf90440e7964caf3929151b8bd2434289dc07e0cf44c35beb722c0776ce0c78d" exitCode=0 Dec 05 10:44:08 crc kubenswrapper[4796]: I1205 10:44:08.950330 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"19820d25-5afd-4e22-b618-d7ee6e8d36b4","Type":"ContainerDied","Data":"bf90440e7964caf3929151b8bd2434289dc07e0cf44c35beb722c0776ce0c78d"} Dec 05 10:44:09 crc kubenswrapper[4796]: 
I1205 10:44:09.017376 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 10:44:09 crc kubenswrapper[4796]: I1205 10:44:09.205974 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19820d25-5afd-4e22-b618-d7ee6e8d36b4-config-data\") pod \"19820d25-5afd-4e22-b618-d7ee6e8d36b4\" (UID: \"19820d25-5afd-4e22-b618-d7ee6e8d36b4\") " Dec 05 10:44:09 crc kubenswrapper[4796]: I1205 10:44:09.206034 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19820d25-5afd-4e22-b618-d7ee6e8d36b4-combined-ca-bundle\") pod \"19820d25-5afd-4e22-b618-d7ee6e8d36b4\" (UID: \"19820d25-5afd-4e22-b618-d7ee6e8d36b4\") " Dec 05 10:44:09 crc kubenswrapper[4796]: I1205 10:44:09.206218 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlmpd\" (UniqueName: \"kubernetes.io/projected/19820d25-5afd-4e22-b618-d7ee6e8d36b4-kube-api-access-dlmpd\") pod \"19820d25-5afd-4e22-b618-d7ee6e8d36b4\" (UID: \"19820d25-5afd-4e22-b618-d7ee6e8d36b4\") " Dec 05 10:44:09 crc kubenswrapper[4796]: I1205 10:44:09.229808 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19820d25-5afd-4e22-b618-d7ee6e8d36b4-kube-api-access-dlmpd" (OuterVolumeSpecName: "kube-api-access-dlmpd") pod "19820d25-5afd-4e22-b618-d7ee6e8d36b4" (UID: "19820d25-5afd-4e22-b618-d7ee6e8d36b4"). InnerVolumeSpecName "kube-api-access-dlmpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:44:09 crc kubenswrapper[4796]: I1205 10:44:09.229836 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19820d25-5afd-4e22-b618-d7ee6e8d36b4-config-data" (OuterVolumeSpecName: "config-data") pod "19820d25-5afd-4e22-b618-d7ee6e8d36b4" (UID: "19820d25-5afd-4e22-b618-d7ee6e8d36b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:44:09 crc kubenswrapper[4796]: I1205 10:44:09.231729 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19820d25-5afd-4e22-b618-d7ee6e8d36b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19820d25-5afd-4e22-b618-d7ee6e8d36b4" (UID: "19820d25-5afd-4e22-b618-d7ee6e8d36b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:44:09 crc kubenswrapper[4796]: I1205 10:44:09.308235 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19820d25-5afd-4e22-b618-d7ee6e8d36b4-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:09 crc kubenswrapper[4796]: I1205 10:44:09.308473 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19820d25-5afd-4e22-b618-d7ee6e8d36b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:09 crc kubenswrapper[4796]: I1205 10:44:09.308541 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlmpd\" (UniqueName: \"kubernetes.io/projected/19820d25-5afd-4e22-b618-d7ee6e8d36b4-kube-api-access-dlmpd\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:09 crc kubenswrapper[4796]: I1205 10:44:09.959892 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"19820d25-5afd-4e22-b618-d7ee6e8d36b4","Type":"ContainerDied","Data":"c6046f9913a98f602830f2d2dab91757e14a9a77bdc2582ad4a2f16b7f369b44"} Dec 05 10:44:09 crc kubenswrapper[4796]: I1205 10:44:09.959971 4796 scope.go:117] "RemoveContainer" containerID="bf90440e7964caf3929151b8bd2434289dc07e0cf44c35beb722c0776ce0c78d" Dec 05 10:44:09 crc kubenswrapper[4796]: I1205 10:44:09.960153 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 10:44:09 crc kubenswrapper[4796]: I1205 10:44:09.990279 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.004846 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.013918 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 10:44:10 crc kubenswrapper[4796]: E1205 10:44:10.014359 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19820d25-5afd-4e22-b618-d7ee6e8d36b4" containerName="nova-scheduler-scheduler" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.014394 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="19820d25-5afd-4e22-b618-d7ee6e8d36b4" containerName="nova-scheduler-scheduler" Dec 05 10:44:10 crc kubenswrapper[4796]: E1205 10:44:10.014422 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a07ea2d4-a394-444a-a8d8-14970378437e" containerName="nova-manage" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.014428 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a07ea2d4-a394-444a-a8d8-14970378437e" containerName="nova-manage" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.014629 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a07ea2d4-a394-444a-a8d8-14970378437e" containerName="nova-manage" Dec 05 10:44:10 crc 
kubenswrapper[4796]: I1205 10:44:10.014662 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="19820d25-5afd-4e22-b618-d7ee6e8d36b4" containerName="nova-scheduler-scheduler" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.015304 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.017084 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.018809 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1935b08-3dfe-4990-9296-634f2dde999f-config-data\") pod \"nova-scheduler-0\" (UID: \"e1935b08-3dfe-4990-9296-634f2dde999f\") " pod="openstack/nova-scheduler-0" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.018901 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1935b08-3dfe-4990-9296-634f2dde999f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e1935b08-3dfe-4990-9296-634f2dde999f\") " pod="openstack/nova-scheduler-0" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.018990 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqrsg\" (UniqueName: \"kubernetes.io/projected/e1935b08-3dfe-4990-9296-634f2dde999f-kube-api-access-hqrsg\") pod \"nova-scheduler-0\" (UID: \"e1935b08-3dfe-4990-9296-634f2dde999f\") " pod="openstack/nova-scheduler-0" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.019862 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.041709 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="19820d25-5afd-4e22-b618-d7ee6e8d36b4" path="/var/lib/kubelet/pods/19820d25-5afd-4e22-b618-d7ee6e8d36b4/volumes" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.120329 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqrsg\" (UniqueName: \"kubernetes.io/projected/e1935b08-3dfe-4990-9296-634f2dde999f-kube-api-access-hqrsg\") pod \"nova-scheduler-0\" (UID: \"e1935b08-3dfe-4990-9296-634f2dde999f\") " pod="openstack/nova-scheduler-0" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.120396 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1935b08-3dfe-4990-9296-634f2dde999f-config-data\") pod \"nova-scheduler-0\" (UID: \"e1935b08-3dfe-4990-9296-634f2dde999f\") " pod="openstack/nova-scheduler-0" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.120452 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1935b08-3dfe-4990-9296-634f2dde999f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e1935b08-3dfe-4990-9296-634f2dde999f\") " pod="openstack/nova-scheduler-0" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.124241 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1935b08-3dfe-4990-9296-634f2dde999f-config-data\") pod \"nova-scheduler-0\" (UID: \"e1935b08-3dfe-4990-9296-634f2dde999f\") " pod="openstack/nova-scheduler-0" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.127482 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1935b08-3dfe-4990-9296-634f2dde999f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e1935b08-3dfe-4990-9296-634f2dde999f\") " pod="openstack/nova-scheduler-0" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.132892 
4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqrsg\" (UniqueName: \"kubernetes.io/projected/e1935b08-3dfe-4990-9296-634f2dde999f-kube-api-access-hqrsg\") pod \"nova-scheduler-0\" (UID: \"e1935b08-3dfe-4990-9296-634f2dde999f\") " pod="openstack/nova-scheduler-0" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.229378 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="cecbf859-aee4-42fc-9ec0-6e552f405df6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": read tcp 10.217.0.2:38196->10.217.0.189:8775: read: connection reset by peer" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.229418 4796 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="cecbf859-aee4-42fc-9ec0-6e552f405df6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.189:8775/\": read tcp 10.217.0.2:38208->10.217.0.189:8775: read: connection reset by peer" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.332103 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.629581 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.658259 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 10:44:10 crc kubenswrapper[4796]: E1205 10:44:10.756301 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd25dde1a_1eba_4b84_b637_134daea7451e.slice\": RecentStats: unable to find data in memory cache]" Dec 05 10:44:10 crc kubenswrapper[4796]: W1205 10:44:10.793857 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1935b08_3dfe_4990_9296_634f2dde999f.slice/crio-e9d1a76caab08a275fd366d871dee068f533ceb1806fab0e90e0d634e572443e WatchSource:0}: Error finding container e9d1a76caab08a275fd366d871dee068f533ceb1806fab0e90e0d634e572443e: Status 404 returned error can't find the container with id e9d1a76caab08a275fd366d871dee068f533ceb1806fab0e90e0d634e572443e Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.797938 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.830748 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cecbf859-aee4-42fc-9ec0-6e552f405df6-config-data\") pod \"cecbf859-aee4-42fc-9ec0-6e552f405df6\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.830849 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-internal-tls-certs\") pod \"61509801-e0f4-4b5e-ac89-725b32c0fb42\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.830950 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-combined-ca-bundle\") pod \"61509801-e0f4-4b5e-ac89-725b32c0fb42\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.830976 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrnz5\" (UniqueName: \"kubernetes.io/projected/61509801-e0f4-4b5e-ac89-725b32c0fb42-kube-api-access-lrnz5\") pod \"61509801-e0f4-4b5e-ac89-725b32c0fb42\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.831016 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkzfz\" (UniqueName: \"kubernetes.io/projected/cecbf859-aee4-42fc-9ec0-6e552f405df6-kube-api-access-tkzfz\") pod \"cecbf859-aee4-42fc-9ec0-6e552f405df6\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.831056 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-config-data\") pod \"61509801-e0f4-4b5e-ac89-725b32c0fb42\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.831133 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cecbf859-aee4-42fc-9ec0-6e552f405df6-combined-ca-bundle\") pod \"cecbf859-aee4-42fc-9ec0-6e552f405df6\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.831166 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cecbf859-aee4-42fc-9ec0-6e552f405df6-nova-metadata-tls-certs\") pod \"cecbf859-aee4-42fc-9ec0-6e552f405df6\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " Dec 05 
10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.831193 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cecbf859-aee4-42fc-9ec0-6e552f405df6-logs\") pod \"cecbf859-aee4-42fc-9ec0-6e552f405df6\" (UID: \"cecbf859-aee4-42fc-9ec0-6e552f405df6\") " Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.831213 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61509801-e0f4-4b5e-ac89-725b32c0fb42-logs\") pod \"61509801-e0f4-4b5e-ac89-725b32c0fb42\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.831270 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-public-tls-certs\") pod \"61509801-e0f4-4b5e-ac89-725b32c0fb42\" (UID: \"61509801-e0f4-4b5e-ac89-725b32c0fb42\") " Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.832296 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61509801-e0f4-4b5e-ac89-725b32c0fb42-logs" (OuterVolumeSpecName: "logs") pod "61509801-e0f4-4b5e-ac89-725b32c0fb42" (UID: "61509801-e0f4-4b5e-ac89-725b32c0fb42"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.832312 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cecbf859-aee4-42fc-9ec0-6e552f405df6-logs" (OuterVolumeSpecName: "logs") pod "cecbf859-aee4-42fc-9ec0-6e552f405df6" (UID: "cecbf859-aee4-42fc-9ec0-6e552f405df6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.833985 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61509801-e0f4-4b5e-ac89-725b32c0fb42-kube-api-access-lrnz5" (OuterVolumeSpecName: "kube-api-access-lrnz5") pod "61509801-e0f4-4b5e-ac89-725b32c0fb42" (UID: "61509801-e0f4-4b5e-ac89-725b32c0fb42"). InnerVolumeSpecName "kube-api-access-lrnz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.835038 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cecbf859-aee4-42fc-9ec0-6e552f405df6-kube-api-access-tkzfz" (OuterVolumeSpecName: "kube-api-access-tkzfz") pod "cecbf859-aee4-42fc-9ec0-6e552f405df6" (UID: "cecbf859-aee4-42fc-9ec0-6e552f405df6"). InnerVolumeSpecName "kube-api-access-tkzfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.850549 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61509801-e0f4-4b5e-ac89-725b32c0fb42" (UID: "61509801-e0f4-4b5e-ac89-725b32c0fb42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.855223 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cecbf859-aee4-42fc-9ec0-6e552f405df6-config-data" (OuterVolumeSpecName: "config-data") pod "cecbf859-aee4-42fc-9ec0-6e552f405df6" (UID: "cecbf859-aee4-42fc-9ec0-6e552f405df6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.856750 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cecbf859-aee4-42fc-9ec0-6e552f405df6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cecbf859-aee4-42fc-9ec0-6e552f405df6" (UID: "cecbf859-aee4-42fc-9ec0-6e552f405df6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.857159 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-config-data" (OuterVolumeSpecName: "config-data") pod "61509801-e0f4-4b5e-ac89-725b32c0fb42" (UID: "61509801-e0f4-4b5e-ac89-725b32c0fb42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.877018 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cecbf859-aee4-42fc-9ec0-6e552f405df6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "cecbf859-aee4-42fc-9ec0-6e552f405df6" (UID: "cecbf859-aee4-42fc-9ec0-6e552f405df6"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.877379 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "61509801-e0f4-4b5e-ac89-725b32c0fb42" (UID: "61509801-e0f4-4b5e-ac89-725b32c0fb42"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.879446 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "61509801-e0f4-4b5e-ac89-725b32c0fb42" (UID: "61509801-e0f4-4b5e-ac89-725b32c0fb42"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.933211 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.933245 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrnz5\" (UniqueName: \"kubernetes.io/projected/61509801-e0f4-4b5e-ac89-725b32c0fb42-kube-api-access-lrnz5\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.933258 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkzfz\" (UniqueName: \"kubernetes.io/projected/cecbf859-aee4-42fc-9ec0-6e552f405df6-kube-api-access-tkzfz\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.933267 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.933277 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cecbf859-aee4-42fc-9ec0-6e552f405df6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.933285 4796 reconciler_common.go:293] "Volume detached for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cecbf859-aee4-42fc-9ec0-6e552f405df6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.933294 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cecbf859-aee4-42fc-9ec0-6e552f405df6-logs\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.933302 4796 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61509801-e0f4-4b5e-ac89-725b32c0fb42-logs\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.933310 4796 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.933317 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cecbf859-aee4-42fc-9ec0-6e552f405df6-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.933329 4796 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/61509801-e0f4-4b5e-ac89-725b32c0fb42-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.969082 4796 generic.go:334] "Generic (PLEG): container finished" podID="cecbf859-aee4-42fc-9ec0-6e552f405df6" containerID="fd1c85398e52080d36f578e0ab0bcead467de3c88188b6a896613c4113e7eb28" exitCode=0 Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.969118 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"cecbf859-aee4-42fc-9ec0-6e552f405df6","Type":"ContainerDied","Data":"fd1c85398e52080d36f578e0ab0bcead467de3c88188b6a896613c4113e7eb28"} Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.969136 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.969156 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cecbf859-aee4-42fc-9ec0-6e552f405df6","Type":"ContainerDied","Data":"dbae647f08c51135c3fd7a31ece5c757c644af5c3222ca239c4bace6ece2003c"} Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.969177 4796 scope.go:117] "RemoveContainer" containerID="fd1c85398e52080d36f578e0ab0bcead467de3c88188b6a896613c4113e7eb28" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.971906 4796 generic.go:334] "Generic (PLEG): container finished" podID="61509801-e0f4-4b5e-ac89-725b32c0fb42" containerID="5842ddab572b06a99a92854b5cdb0bd6269b35471c509d1c5410ba0a5639b6cc" exitCode=0 Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.971985 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.972131 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61509801-e0f4-4b5e-ac89-725b32c0fb42","Type":"ContainerDied","Data":"5842ddab572b06a99a92854b5cdb0bd6269b35471c509d1c5410ba0a5639b6cc"} Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.972196 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61509801-e0f4-4b5e-ac89-725b32c0fb42","Type":"ContainerDied","Data":"70f7bb6b1e1213c1cc2e285e47c061af50712c69cc905dec57f4ae047193c325"} Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.973521 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e1935b08-3dfe-4990-9296-634f2dde999f","Type":"ContainerStarted","Data":"8311b8989f0a5ed0857dc78db8bed65c486bf4d1beec7768df856befc3e39d97"} Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.973547 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e1935b08-3dfe-4990-9296-634f2dde999f","Type":"ContainerStarted","Data":"e9d1a76caab08a275fd366d871dee068f533ceb1806fab0e90e0d634e572443e"} Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.990950 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.990937094 podStartE2EDuration="1.990937094s" podCreationTimestamp="2025-12-05 10:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:44:10.989105368 +0000 UTC m=+997.277210882" watchObservedRunningTime="2025-12-05 10:44:10.990937094 +0000 UTC m=+997.279042607" Dec 05 10:44:10 crc kubenswrapper[4796]: I1205 10:44:10.991283 4796 scope.go:117] "RemoveContainer" containerID="b5d46b11549ee601e73a39682a5dea69a95488b41bb630d7e86c0c6c5c2fa0af" Dec 05 10:44:11 
crc kubenswrapper[4796]: I1205 10:44:11.003861 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.009440 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.009650 4796 scope.go:117] "RemoveContainer" containerID="fd1c85398e52080d36f578e0ab0bcead467de3c88188b6a896613c4113e7eb28" Dec 05 10:44:11 crc kubenswrapper[4796]: E1205 10:44:11.010010 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd1c85398e52080d36f578e0ab0bcead467de3c88188b6a896613c4113e7eb28\": container with ID starting with fd1c85398e52080d36f578e0ab0bcead467de3c88188b6a896613c4113e7eb28 not found: ID does not exist" containerID="fd1c85398e52080d36f578e0ab0bcead467de3c88188b6a896613c4113e7eb28" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.010038 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd1c85398e52080d36f578e0ab0bcead467de3c88188b6a896613c4113e7eb28"} err="failed to get container status \"fd1c85398e52080d36f578e0ab0bcead467de3c88188b6a896613c4113e7eb28\": rpc error: code = NotFound desc = could not find container \"fd1c85398e52080d36f578e0ab0bcead467de3c88188b6a896613c4113e7eb28\": container with ID starting with fd1c85398e52080d36f578e0ab0bcead467de3c88188b6a896613c4113e7eb28 not found: ID does not exist" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.010056 4796 scope.go:117] "RemoveContainer" containerID="b5d46b11549ee601e73a39682a5dea69a95488b41bb630d7e86c0c6c5c2fa0af" Dec 05 10:44:11 crc kubenswrapper[4796]: E1205 10:44:11.011089 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d46b11549ee601e73a39682a5dea69a95488b41bb630d7e86c0c6c5c2fa0af\": container with ID starting with 
b5d46b11549ee601e73a39682a5dea69a95488b41bb630d7e86c0c6c5c2fa0af not found: ID does not exist" containerID="b5d46b11549ee601e73a39682a5dea69a95488b41bb630d7e86c0c6c5c2fa0af" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.011123 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d46b11549ee601e73a39682a5dea69a95488b41bb630d7e86c0c6c5c2fa0af"} err="failed to get container status \"b5d46b11549ee601e73a39682a5dea69a95488b41bb630d7e86c0c6c5c2fa0af\": rpc error: code = NotFound desc = could not find container \"b5d46b11549ee601e73a39682a5dea69a95488b41bb630d7e86c0c6c5c2fa0af\": container with ID starting with b5d46b11549ee601e73a39682a5dea69a95488b41bb630d7e86c0c6c5c2fa0af not found: ID does not exist" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.011147 4796 scope.go:117] "RemoveContainer" containerID="5842ddab572b06a99a92854b5cdb0bd6269b35471c509d1c5410ba0a5639b6cc" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.017379 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.026724 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.033159 4796 scope.go:117] "RemoveContainer" containerID="28c2609a4ebce4166587e59f413d53910acbacc3831cdafba4fc11929b973292" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.036340 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:44:11 crc kubenswrapper[4796]: E1205 10:44:11.036746 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cecbf859-aee4-42fc-9ec0-6e552f405df6" containerName="nova-metadata-metadata" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.036763 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="cecbf859-aee4-42fc-9ec0-6e552f405df6" containerName="nova-metadata-metadata" Dec 05 10:44:11 crc 
kubenswrapper[4796]: E1205 10:44:11.036782 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cecbf859-aee4-42fc-9ec0-6e552f405df6" containerName="nova-metadata-log" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.036788 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="cecbf859-aee4-42fc-9ec0-6e552f405df6" containerName="nova-metadata-log" Dec 05 10:44:11 crc kubenswrapper[4796]: E1205 10:44:11.036805 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61509801-e0f4-4b5e-ac89-725b32c0fb42" containerName="nova-api-log" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.036821 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="61509801-e0f4-4b5e-ac89-725b32c0fb42" containerName="nova-api-log" Dec 05 10:44:11 crc kubenswrapper[4796]: E1205 10:44:11.036844 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61509801-e0f4-4b5e-ac89-725b32c0fb42" containerName="nova-api-api" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.036852 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="61509801-e0f4-4b5e-ac89-725b32c0fb42" containerName="nova-api-api" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.037036 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="cecbf859-aee4-42fc-9ec0-6e552f405df6" containerName="nova-metadata-log" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.037060 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="61509801-e0f4-4b5e-ac89-725b32c0fb42" containerName="nova-api-log" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.037074 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="61509801-e0f4-4b5e-ac89-725b32c0fb42" containerName="nova-api-api" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.037083 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="cecbf859-aee4-42fc-9ec0-6e552f405df6" containerName="nova-metadata-metadata" Dec 05 10:44:11 crc 
kubenswrapper[4796]: I1205 10:44:11.038026 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.040004 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.040202 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.044221 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.045451 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.051559 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.051989 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.052121 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.052158 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.052119 4796 scope.go:117] "RemoveContainer" containerID="5842ddab572b06a99a92854b5cdb0bd6269b35471c509d1c5410ba0a5639b6cc" Dec 05 10:44:11 crc kubenswrapper[4796]: E1205 10:44:11.056302 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5842ddab572b06a99a92854b5cdb0bd6269b35471c509d1c5410ba0a5639b6cc\": container with ID starting with 5842ddab572b06a99a92854b5cdb0bd6269b35471c509d1c5410ba0a5639b6cc not found: 
ID does not exist" containerID="5842ddab572b06a99a92854b5cdb0bd6269b35471c509d1c5410ba0a5639b6cc" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.056353 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5842ddab572b06a99a92854b5cdb0bd6269b35471c509d1c5410ba0a5639b6cc"} err="failed to get container status \"5842ddab572b06a99a92854b5cdb0bd6269b35471c509d1c5410ba0a5639b6cc\": rpc error: code = NotFound desc = could not find container \"5842ddab572b06a99a92854b5cdb0bd6269b35471c509d1c5410ba0a5639b6cc\": container with ID starting with 5842ddab572b06a99a92854b5cdb0bd6269b35471c509d1c5410ba0a5639b6cc not found: ID does not exist" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.056380 4796 scope.go:117] "RemoveContainer" containerID="28c2609a4ebce4166587e59f413d53910acbacc3831cdafba4fc11929b973292" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.056882 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 10:44:11 crc kubenswrapper[4796]: E1205 10:44:11.062475 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28c2609a4ebce4166587e59f413d53910acbacc3831cdafba4fc11929b973292\": container with ID starting with 28c2609a4ebce4166587e59f413d53910acbacc3831cdafba4fc11929b973292 not found: ID does not exist" containerID="28c2609a4ebce4166587e59f413d53910acbacc3831cdafba4fc11929b973292" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.062518 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28c2609a4ebce4166587e59f413d53910acbacc3831cdafba4fc11929b973292"} err="failed to get container status \"28c2609a4ebce4166587e59f413d53910acbacc3831cdafba4fc11929b973292\": rpc error: code = NotFound desc = could not find container \"28c2609a4ebce4166587e59f413d53910acbacc3831cdafba4fc11929b973292\": container with ID starting with 
28c2609a4ebce4166587e59f413d53910acbacc3831cdafba4fc11929b973292 not found: ID does not exist" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.239442 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76\") " pod="openstack/nova-metadata-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.239722 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0b532d4-1ab1-4d26-ac86-269a32a1bade-public-tls-certs\") pod \"nova-api-0\" (UID: \"f0b532d4-1ab1-4d26-ac86-269a32a1bade\") " pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.239752 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76\") " pod="openstack/nova-metadata-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.239783 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76-config-data\") pod \"nova-metadata-0\" (UID: \"d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76\") " pod="openstack/nova-metadata-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.239798 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0b532d4-1ab1-4d26-ac86-269a32a1bade-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f0b532d4-1ab1-4d26-ac86-269a32a1bade\") " 
pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.239819 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z2hq\" (UniqueName: \"kubernetes.io/projected/d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76-kube-api-access-2z2hq\") pod \"nova-metadata-0\" (UID: \"d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76\") " pod="openstack/nova-metadata-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.239882 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0b532d4-1ab1-4d26-ac86-269a32a1bade-logs\") pod \"nova-api-0\" (UID: \"f0b532d4-1ab1-4d26-ac86-269a32a1bade\") " pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.239898 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b532d4-1ab1-4d26-ac86-269a32a1bade-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f0b532d4-1ab1-4d26-ac86-269a32a1bade\") " pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.239915 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76-logs\") pod \"nova-metadata-0\" (UID: \"d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76\") " pod="openstack/nova-metadata-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.239930 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b532d4-1ab1-4d26-ac86-269a32a1bade-config-data\") pod \"nova-api-0\" (UID: \"f0b532d4-1ab1-4d26-ac86-269a32a1bade\") " pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.239963 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx5xc\" (UniqueName: \"kubernetes.io/projected/f0b532d4-1ab1-4d26-ac86-269a32a1bade-kube-api-access-qx5xc\") pod \"nova-api-0\" (UID: \"f0b532d4-1ab1-4d26-ac86-269a32a1bade\") " pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.342103 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx5xc\" (UniqueName: \"kubernetes.io/projected/f0b532d4-1ab1-4d26-ac86-269a32a1bade-kube-api-access-qx5xc\") pod \"nova-api-0\" (UID: \"f0b532d4-1ab1-4d26-ac86-269a32a1bade\") " pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.342188 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76\") " pod="openstack/nova-metadata-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.342207 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0b532d4-1ab1-4d26-ac86-269a32a1bade-public-tls-certs\") pod \"nova-api-0\" (UID: \"f0b532d4-1ab1-4d26-ac86-269a32a1bade\") " pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.342237 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76\") " pod="openstack/nova-metadata-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.342263 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f0b532d4-1ab1-4d26-ac86-269a32a1bade-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f0b532d4-1ab1-4d26-ac86-269a32a1bade\") " pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.342279 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76-config-data\") pod \"nova-metadata-0\" (UID: \"d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76\") " pod="openstack/nova-metadata-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.342298 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z2hq\" (UniqueName: \"kubernetes.io/projected/d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76-kube-api-access-2z2hq\") pod \"nova-metadata-0\" (UID: \"d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76\") " pod="openstack/nova-metadata-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.342364 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0b532d4-1ab1-4d26-ac86-269a32a1bade-logs\") pod \"nova-api-0\" (UID: \"f0b532d4-1ab1-4d26-ac86-269a32a1bade\") " pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.342378 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b532d4-1ab1-4d26-ac86-269a32a1bade-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f0b532d4-1ab1-4d26-ac86-269a32a1bade\") " pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.342405 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76-logs\") pod \"nova-metadata-0\" (UID: \"d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76\") " pod="openstack/nova-metadata-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 
10:44:11.342421 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b532d4-1ab1-4d26-ac86-269a32a1bade-config-data\") pod \"nova-api-0\" (UID: \"f0b532d4-1ab1-4d26-ac86-269a32a1bade\") " pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.343405 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0b532d4-1ab1-4d26-ac86-269a32a1bade-logs\") pod \"nova-api-0\" (UID: \"f0b532d4-1ab1-4d26-ac86-269a32a1bade\") " pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.343942 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76-logs\") pod \"nova-metadata-0\" (UID: \"d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76\") " pod="openstack/nova-metadata-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.347031 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76\") " pod="openstack/nova-metadata-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.347046 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0b532d4-1ab1-4d26-ac86-269a32a1bade-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f0b532d4-1ab1-4d26-ac86-269a32a1bade\") " pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.352167 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76\") " pod="openstack/nova-metadata-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.352225 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b532d4-1ab1-4d26-ac86-269a32a1bade-config-data\") pod \"nova-api-0\" (UID: \"f0b532d4-1ab1-4d26-ac86-269a32a1bade\") " pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.352279 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76-config-data\") pod \"nova-metadata-0\" (UID: \"d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76\") " pod="openstack/nova-metadata-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.352763 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0b532d4-1ab1-4d26-ac86-269a32a1bade-public-tls-certs\") pod \"nova-api-0\" (UID: \"f0b532d4-1ab1-4d26-ac86-269a32a1bade\") " pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.352840 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b532d4-1ab1-4d26-ac86-269a32a1bade-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f0b532d4-1ab1-4d26-ac86-269a32a1bade\") " pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.357966 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z2hq\" (UniqueName: \"kubernetes.io/projected/d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76-kube-api-access-2z2hq\") pod \"nova-metadata-0\" (UID: \"d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76\") " pod="openstack/nova-metadata-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.358768 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx5xc\" (UniqueName: 
\"kubernetes.io/projected/f0b532d4-1ab1-4d26-ac86-269a32a1bade-kube-api-access-qx5xc\") pod \"nova-api-0\" (UID: \"f0b532d4-1ab1-4d26-ac86-269a32a1bade\") " pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.363818 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.372082 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.764617 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.812418 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 05 10:44:11 crc kubenswrapper[4796]: W1205 10:44:11.812907 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0b532d4_1ab1_4d26_ac86_269a32a1bade.slice/crio-e80e443c9b14baba2c0ca03e8a1e008d98edd8a20446db2adcc2c5bfe0c1b296 WatchSource:0}: Error finding container e80e443c9b14baba2c0ca03e8a1e008d98edd8a20446db2adcc2c5bfe0c1b296: Status 404 returned error can't find the container with id e80e443c9b14baba2c0ca03e8a1e008d98edd8a20446db2adcc2c5bfe0c1b296 Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.987853 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76","Type":"ContainerStarted","Data":"9576cb0b5c2ee90abbccb483be753530de894f4d0edad2dfe2d4e88592d06436"} Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.988047 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76","Type":"ContainerStarted","Data":"67feb8953330c531b75736e95e19a0537821953ddfa13f06b544e698bb6fcd54"} Dec 05 10:44:11 crc 
kubenswrapper[4796]: I1205 10:44:11.990404 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0b532d4-1ab1-4d26-ac86-269a32a1bade","Type":"ContainerStarted","Data":"86f000909609570486abe4c8d8bfabb987eed16f8b84195d9948f6199b66c271"} Dec 05 10:44:11 crc kubenswrapper[4796]: I1205 10:44:11.990422 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0b532d4-1ab1-4d26-ac86-269a32a1bade","Type":"ContainerStarted","Data":"e80e443c9b14baba2c0ca03e8a1e008d98edd8a20446db2adcc2c5bfe0c1b296"} Dec 05 10:44:12 crc kubenswrapper[4796]: I1205 10:44:12.040174 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61509801-e0f4-4b5e-ac89-725b32c0fb42" path="/var/lib/kubelet/pods/61509801-e0f4-4b5e-ac89-725b32c0fb42/volumes" Dec 05 10:44:12 crc kubenswrapper[4796]: I1205 10:44:12.040883 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cecbf859-aee4-42fc-9ec0-6e552f405df6" path="/var/lib/kubelet/pods/cecbf859-aee4-42fc-9ec0-6e552f405df6/volumes" Dec 05 10:44:12 crc kubenswrapper[4796]: I1205 10:44:12.998044 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76","Type":"ContainerStarted","Data":"ac71c29dc7644781c86fe8d887fdac744e5e32a408bf654714b85ecdffbfe68d"} Dec 05 10:44:13 crc kubenswrapper[4796]: I1205 10:44:13.000049 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f0b532d4-1ab1-4d26-ac86-269a32a1bade","Type":"ContainerStarted","Data":"2dc96a9a27d72dd348f689776e282c7834591877436216d24234da0926263ff4"} Dec 05 10:44:13 crc kubenswrapper[4796]: I1205 10:44:13.043262 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.043247142 podStartE2EDuration="2.043247142s" podCreationTimestamp="2025-12-05 10:44:11 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:44:13.039674442 +0000 UTC m=+999.327779954" watchObservedRunningTime="2025-12-05 10:44:13.043247142 +0000 UTC m=+999.331352645" Dec 05 10:44:13 crc kubenswrapper[4796]: I1205 10:44:13.071595 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.071578611 podStartE2EDuration="2.071578611s" podCreationTimestamp="2025-12-05 10:44:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:44:13.064325136 +0000 UTC m=+999.352430649" watchObservedRunningTime="2025-12-05 10:44:13.071578611 +0000 UTC m=+999.359684125" Dec 05 10:44:15 crc kubenswrapper[4796]: I1205 10:44:15.332668 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 05 10:44:16 crc kubenswrapper[4796]: I1205 10:44:16.364224 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 10:44:16 crc kubenswrapper[4796]: I1205 10:44:16.364907 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 05 10:44:20 crc kubenswrapper[4796]: I1205 10:44:20.332727 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 05 10:44:20 crc kubenswrapper[4796]: I1205 10:44:20.355862 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 05 10:44:20 crc kubenswrapper[4796]: E1205 10:44:20.958605 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd25dde1a_1eba_4b84_b637_134daea7451e.slice\": RecentStats: unable to find data in memory cache]" Dec 05 10:44:21 crc kubenswrapper[4796]: 
I1205 10:44:21.087127 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 05 10:44:21 crc kubenswrapper[4796]: I1205 10:44:21.364243 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 10:44:21 crc kubenswrapper[4796]: I1205 10:44:21.364302 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 05 10:44:21 crc kubenswrapper[4796]: I1205 10:44:21.372330 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 10:44:21 crc kubenswrapper[4796]: I1205 10:44:21.372373 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 05 10:44:21 crc kubenswrapper[4796]: I1205 10:44:21.728079 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 05 10:44:22 crc kubenswrapper[4796]: I1205 10:44:22.381822 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 10:44:22 crc kubenswrapper[4796]: I1205 10:44:22.381877 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.200:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 10:44:22 crc kubenswrapper[4796]: I1205 10:44:22.394812 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f0b532d4-1ab1-4d26-ac86-269a32a1bade" containerName="nova-api-log" probeResult="failure" output="Get 
\"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 10:44:22 crc kubenswrapper[4796]: I1205 10:44:22.394845 4796 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f0b532d4-1ab1-4d26-ac86-269a32a1bade" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 05 10:44:31 crc kubenswrapper[4796]: E1205 10:44:31.156530 4796 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd25dde1a_1eba_4b84_b637_134daea7451e.slice\": RecentStats: unable to find data in memory cache]" Dec 05 10:44:31 crc kubenswrapper[4796]: I1205 10:44:31.368046 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 05 10:44:31 crc kubenswrapper[4796]: I1205 10:44:31.369075 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 05 10:44:31 crc kubenswrapper[4796]: I1205 10:44:31.373066 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 05 10:44:31 crc kubenswrapper[4796]: I1205 10:44:31.377509 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 10:44:31 crc kubenswrapper[4796]: I1205 10:44:31.377581 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 05 10:44:31 crc kubenswrapper[4796]: I1205 10:44:31.377878 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 10:44:31 crc kubenswrapper[4796]: I1205 10:44:31.377915 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 05 10:44:31 crc 
kubenswrapper[4796]: I1205 10:44:31.381208 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 10:44:31 crc kubenswrapper[4796]: I1205 10:44:31.385735 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 05 10:44:32 crc kubenswrapper[4796]: I1205 10:44:32.153501 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 05 10:44:38 crc kubenswrapper[4796]: I1205 10:44:38.591083 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 10:44:39 crc kubenswrapper[4796]: I1205 10:44:39.489008 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 10:44:42 crc kubenswrapper[4796]: I1205 10:44:42.204933 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="1a61d456-9eea-447f-b576-77473222d108" containerName="rabbitmq" containerID="cri-o://02fa5ac098b11c09d932690d9a4be31f5d6f43533d9db4d59a743254bce11892" gracePeriod=604797 Dec 05 10:44:42 crc kubenswrapper[4796]: I1205 10:44:42.600815 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f735325f-6e38-45a2-a5bd-9ad19c40b36f" containerName="rabbitmq" containerID="cri-o://50196089115ad967e95a3af23289a733957ed5dc09de5e8bb3c60e6f3d6ed797" gracePeriod=604797 Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.602622 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.786089 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"1a61d456-9eea-447f-b576-77473222d108\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.786443 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-erlang-cookie\") pod \"1a61d456-9eea-447f-b576-77473222d108\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.786466 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f68jt\" (UniqueName: \"kubernetes.io/projected/1a61d456-9eea-447f-b576-77473222d108-kube-api-access-f68jt\") pod \"1a61d456-9eea-447f-b576-77473222d108\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.786483 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-tls\") pod \"1a61d456-9eea-447f-b576-77473222d108\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.786510 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-plugins\") pod \"1a61d456-9eea-447f-b576-77473222d108\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.786544 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" 
(UniqueName: \"kubernetes.io/secret/1a61d456-9eea-447f-b576-77473222d108-erlang-cookie-secret\") pod \"1a61d456-9eea-447f-b576-77473222d108\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.786561 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1a61d456-9eea-447f-b576-77473222d108-server-conf\") pod \"1a61d456-9eea-447f-b576-77473222d108\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.786582 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-confd\") pod \"1a61d456-9eea-447f-b576-77473222d108\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.786605 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1a61d456-9eea-447f-b576-77473222d108-pod-info\") pod \"1a61d456-9eea-447f-b576-77473222d108\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.786644 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1a61d456-9eea-447f-b576-77473222d108-plugins-conf\") pod \"1a61d456-9eea-447f-b576-77473222d108\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.786662 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a61d456-9eea-447f-b576-77473222d108-config-data\") pod \"1a61d456-9eea-447f-b576-77473222d108\" (UID: \"1a61d456-9eea-447f-b576-77473222d108\") " Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.787450 
4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "1a61d456-9eea-447f-b576-77473222d108" (UID: "1a61d456-9eea-447f-b576-77473222d108"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.790612 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a61d456-9eea-447f-b576-77473222d108-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "1a61d456-9eea-447f-b576-77473222d108" (UID: "1a61d456-9eea-447f-b576-77473222d108"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.791021 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "1a61d456-9eea-447f-b576-77473222d108" (UID: "1a61d456-9eea-447f-b576-77473222d108"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.791137 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a61d456-9eea-447f-b576-77473222d108-kube-api-access-f68jt" (OuterVolumeSpecName: "kube-api-access-f68jt") pod "1a61d456-9eea-447f-b576-77473222d108" (UID: "1a61d456-9eea-447f-b576-77473222d108"). InnerVolumeSpecName "kube-api-access-f68jt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.791856 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "1a61d456-9eea-447f-b576-77473222d108" (UID: "1a61d456-9eea-447f-b576-77473222d108"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.793973 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/1a61d456-9eea-447f-b576-77473222d108-pod-info" (OuterVolumeSpecName: "pod-info") pod "1a61d456-9eea-447f-b576-77473222d108" (UID: "1a61d456-9eea-447f-b576-77473222d108"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.794654 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "1a61d456-9eea-447f-b576-77473222d108" (UID: "1a61d456-9eea-447f-b576-77473222d108"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.807225 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a61d456-9eea-447f-b576-77473222d108-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "1a61d456-9eea-447f-b576-77473222d108" (UID: "1a61d456-9eea-447f-b576-77473222d108"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.821816 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a61d456-9eea-447f-b576-77473222d108-config-data" (OuterVolumeSpecName: "config-data") pod "1a61d456-9eea-447f-b576-77473222d108" (UID: "1a61d456-9eea-447f-b576-77473222d108"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.850328 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a61d456-9eea-447f-b576-77473222d108-server-conf" (OuterVolumeSpecName: "server-conf") pod "1a61d456-9eea-447f-b576-77473222d108" (UID: "1a61d456-9eea-447f-b576-77473222d108"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.888989 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.889019 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f68jt\" (UniqueName: \"kubernetes.io/projected/1a61d456-9eea-447f-b576-77473222d108-kube-api-access-f68jt\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.889030 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.889039 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-plugins\") on node 
\"crc\" DevicePath \"\"" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.889047 4796 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1a61d456-9eea-447f-b576-77473222d108-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.889054 4796 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1a61d456-9eea-447f-b576-77473222d108-server-conf\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.889063 4796 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1a61d456-9eea-447f-b576-77473222d108-pod-info\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.889071 4796 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1a61d456-9eea-447f-b576-77473222d108-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.889079 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1a61d456-9eea-447f-b576-77473222d108-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.889105 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.896074 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "1a61d456-9eea-447f-b576-77473222d108" (UID: "1a61d456-9eea-447f-b576-77473222d108"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.922310 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.933646 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.994512 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1a61d456-9eea-447f-b576-77473222d108-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:48 crc kubenswrapper[4796]: I1205 10:44:48.994549 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.095300 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-plugins\") pod \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.095346 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f735325f-6e38-45a2-a5bd-9ad19c40b36f-pod-info\") pod \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.095378 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f735325f-6e38-45a2-a5bd-9ad19c40b36f-plugins-conf\") pod \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\" (UID: 
\"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.095424 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f735325f-6e38-45a2-a5bd-9ad19c40b36f-config-data\") pod \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.095511 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc7gp\" (UniqueName: \"kubernetes.io/projected/f735325f-6e38-45a2-a5bd-9ad19c40b36f-kube-api-access-wc7gp\") pod \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.095528 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f735325f-6e38-45a2-a5bd-9ad19c40b36f-server-conf\") pod \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.095562 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f735325f-6e38-45a2-a5bd-9ad19c40b36f-erlang-cookie-secret\") pod \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.095608 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-confd\") pod \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.095621 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-tls\") pod \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.095649 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-erlang-cookie\") pod \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.095665 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\" (UID: \"f735325f-6e38-45a2-a5bd-9ad19c40b36f\") " Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.095747 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f735325f-6e38-45a2-a5bd-9ad19c40b36f" (UID: "f735325f-6e38-45a2-a5bd-9ad19c40b36f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.095962 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f735325f-6e38-45a2-a5bd-9ad19c40b36f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f735325f-6e38-45a2-a5bd-9ad19c40b36f" (UID: "f735325f-6e38-45a2-a5bd-9ad19c40b36f"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.096328 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f735325f-6e38-45a2-a5bd-9ad19c40b36f" (UID: "f735325f-6e38-45a2-a5bd-9ad19c40b36f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.100530 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f735325f-6e38-45a2-a5bd-9ad19c40b36f-kube-api-access-wc7gp" (OuterVolumeSpecName: "kube-api-access-wc7gp") pod "f735325f-6e38-45a2-a5bd-9ad19c40b36f" (UID: "f735325f-6e38-45a2-a5bd-9ad19c40b36f"). InnerVolumeSpecName "kube-api-access-wc7gp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.101910 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f735325f-6e38-45a2-a5bd-9ad19c40b36f-pod-info" (OuterVolumeSpecName: "pod-info") pod "f735325f-6e38-45a2-a5bd-9ad19c40b36f" (UID: "f735325f-6e38-45a2-a5bd-9ad19c40b36f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.101987 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "f735325f-6e38-45a2-a5bd-9ad19c40b36f" (UID: "f735325f-6e38-45a2-a5bd-9ad19c40b36f"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.102194 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.102265 4796 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f735325f-6e38-45a2-a5bd-9ad19c40b36f-pod-info\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.102325 4796 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f735325f-6e38-45a2-a5bd-9ad19c40b36f-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.102377 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc7gp\" (UniqueName: \"kubernetes.io/projected/f735325f-6e38-45a2-a5bd-9ad19c40b36f-kube-api-access-wc7gp\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.102450 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.102534 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.102760 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f735325f-6e38-45a2-a5bd-9ad19c40b36f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f735325f-6e38-45a2-a5bd-9ad19c40b36f" (UID: 
"f735325f-6e38-45a2-a5bd-9ad19c40b36f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.103745 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f735325f-6e38-45a2-a5bd-9ad19c40b36f" (UID: "f735325f-6e38-45a2-a5bd-9ad19c40b36f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.118812 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f735325f-6e38-45a2-a5bd-9ad19c40b36f-config-data" (OuterVolumeSpecName: "config-data") pod "f735325f-6e38-45a2-a5bd-9ad19c40b36f" (UID: "f735325f-6e38-45a2-a5bd-9ad19c40b36f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.123752 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.138339 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f735325f-6e38-45a2-a5bd-9ad19c40b36f-server-conf" (OuterVolumeSpecName: "server-conf") pod "f735325f-6e38-45a2-a5bd-9ad19c40b36f" (UID: "f735325f-6e38-45a2-a5bd-9ad19c40b36f"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.172809 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f735325f-6e38-45a2-a5bd-9ad19c40b36f" (UID: "f735325f-6e38-45a2-a5bd-9ad19c40b36f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.204629 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.204669 4796 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f735325f-6e38-45a2-a5bd-9ad19c40b36f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.204699 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.204710 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f735325f-6e38-45a2-a5bd-9ad19c40b36f-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.204718 4796 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f735325f-6e38-45a2-a5bd-9ad19c40b36f-server-conf\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.204725 4796 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/f735325f-6e38-45a2-a5bd-9ad19c40b36f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.266008 4796 generic.go:334] "Generic (PLEG): container finished" podID="f735325f-6e38-45a2-a5bd-9ad19c40b36f" containerID="50196089115ad967e95a3af23289a733957ed5dc09de5e8bb3c60e6f3d6ed797" exitCode=0 Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.266072 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.266116 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f735325f-6e38-45a2-a5bd-9ad19c40b36f","Type":"ContainerDied","Data":"50196089115ad967e95a3af23289a733957ed5dc09de5e8bb3c60e6f3d6ed797"} Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.266227 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f735325f-6e38-45a2-a5bd-9ad19c40b36f","Type":"ContainerDied","Data":"3b48ae83e0524095cbdd1c3e5a8024047b535d12336875e63b809305fa55ae01"} Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.266249 4796 scope.go:117] "RemoveContainer" containerID="50196089115ad967e95a3af23289a733957ed5dc09de5e8bb3c60e6f3d6ed797" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.270214 4796 generic.go:334] "Generic (PLEG): container finished" podID="1a61d456-9eea-447f-b576-77473222d108" containerID="02fa5ac098b11c09d932690d9a4be31f5d6f43533d9db4d59a743254bce11892" exitCode=0 Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.270248 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1a61d456-9eea-447f-b576-77473222d108","Type":"ContainerDied","Data":"02fa5ac098b11c09d932690d9a4be31f5d6f43533d9db4d59a743254bce11892"} Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.270273 4796 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"1a61d456-9eea-447f-b576-77473222d108","Type":"ContainerDied","Data":"a7c9b0ab82d87d579412d02366af28e420fa8ec0bf60ad411a27483abbd41d01"} Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.270349 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.295646 4796 scope.go:117] "RemoveContainer" containerID="90d497911455b9ebcc3523bfde3fcf72dc248a8643992a0cc697cee2006b0b31" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.306568 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.314630 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.322796 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.332008 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.334382 4796 scope.go:117] "RemoveContainer" containerID="50196089115ad967e95a3af23289a733957ed5dc09de5e8bb3c60e6f3d6ed797" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.335501 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 10:44:49 crc kubenswrapper[4796]: E1205 10:44:49.335903 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f735325f-6e38-45a2-a5bd-9ad19c40b36f" containerName="rabbitmq" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.335921 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f735325f-6e38-45a2-a5bd-9ad19c40b36f" containerName="rabbitmq" Dec 05 10:44:49 crc kubenswrapper[4796]: E1205 10:44:49.335933 4796 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a61d456-9eea-447f-b576-77473222d108" containerName="setup-container" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.335939 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a61d456-9eea-447f-b576-77473222d108" containerName="setup-container" Dec 05 10:44:49 crc kubenswrapper[4796]: E1205 10:44:49.335955 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a61d456-9eea-447f-b576-77473222d108" containerName="rabbitmq" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.335960 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a61d456-9eea-447f-b576-77473222d108" containerName="rabbitmq" Dec 05 10:44:49 crc kubenswrapper[4796]: E1205 10:44:49.335974 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f735325f-6e38-45a2-a5bd-9ad19c40b36f" containerName="setup-container" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.335980 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f735325f-6e38-45a2-a5bd-9ad19c40b36f" containerName="setup-container" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.336137 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a61d456-9eea-447f-b576-77473222d108" containerName="rabbitmq" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.336173 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f735325f-6e38-45a2-a5bd-9ad19c40b36f" containerName="rabbitmq" Dec 05 10:44:49 crc kubenswrapper[4796]: E1205 10:44:49.336316 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50196089115ad967e95a3af23289a733957ed5dc09de5e8bb3c60e6f3d6ed797\": container with ID starting with 50196089115ad967e95a3af23289a733957ed5dc09de5e8bb3c60e6f3d6ed797 not found: ID does not exist" containerID="50196089115ad967e95a3af23289a733957ed5dc09de5e8bb3c60e6f3d6ed797" Dec 05 10:44:49 crc 
kubenswrapper[4796]: I1205 10:44:49.336403 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50196089115ad967e95a3af23289a733957ed5dc09de5e8bb3c60e6f3d6ed797"} err="failed to get container status \"50196089115ad967e95a3af23289a733957ed5dc09de5e8bb3c60e6f3d6ed797\": rpc error: code = NotFound desc = could not find container \"50196089115ad967e95a3af23289a733957ed5dc09de5e8bb3c60e6f3d6ed797\": container with ID starting with 50196089115ad967e95a3af23289a733957ed5dc09de5e8bb3c60e6f3d6ed797 not found: ID does not exist" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.336492 4796 scope.go:117] "RemoveContainer" containerID="90d497911455b9ebcc3523bfde3fcf72dc248a8643992a0cc697cee2006b0b31" Dec 05 10:44:49 crc kubenswrapper[4796]: E1205 10:44:49.336876 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90d497911455b9ebcc3523bfde3fcf72dc248a8643992a0cc697cee2006b0b31\": container with ID starting with 90d497911455b9ebcc3523bfde3fcf72dc248a8643992a0cc697cee2006b0b31 not found: ID does not exist" containerID="90d497911455b9ebcc3523bfde3fcf72dc248a8643992a0cc697cee2006b0b31" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.336914 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90d497911455b9ebcc3523bfde3fcf72dc248a8643992a0cc697cee2006b0b31"} err="failed to get container status \"90d497911455b9ebcc3523bfde3fcf72dc248a8643992a0cc697cee2006b0b31\": rpc error: code = NotFound desc = could not find container \"90d497911455b9ebcc3523bfde3fcf72dc248a8643992a0cc697cee2006b0b31\": container with ID starting with 90d497911455b9ebcc3523bfde3fcf72dc248a8643992a0cc697cee2006b0b31 not found: ID does not exist" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.336945 4796 scope.go:117] "RemoveContainer" containerID="02fa5ac098b11c09d932690d9a4be31f5d6f43533d9db4d59a743254bce11892" Dec 05 
10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.337054 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.340096 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.340270 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.340386 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-nq4sp" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.340765 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.340910 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.341018 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.341153 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.353751 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.356075 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.359849 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.360457 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.360572 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-jzl2c" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.360724 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.360942 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.361051 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.361056 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.368457 4796 scope.go:117] "RemoveContainer" containerID="41edc440f58fb63cf7fd571cf2e1576f6d345add64c3aa95d2ab5e12423231cf" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.383726 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.401624 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.411346 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.411376 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d179e637-ffa5-41af-9038-6728586665a6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.411393 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d179e637-ffa5-41af-9038-6728586665a6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.411413 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qgfw\" (UniqueName: \"kubernetes.io/projected/d179e637-ffa5-41af-9038-6728586665a6-kube-api-access-6qgfw\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.411444 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d179e637-ffa5-41af-9038-6728586665a6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.411482 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d179e637-ffa5-41af-9038-6728586665a6-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.411498 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d179e637-ffa5-41af-9038-6728586665a6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.411517 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d179e637-ffa5-41af-9038-6728586665a6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.411535 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d179e637-ffa5-41af-9038-6728586665a6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.411583 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d179e637-ffa5-41af-9038-6728586665a6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.411603 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d179e637-ffa5-41af-9038-6728586665a6-pod-info\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.437064 4796 scope.go:117] "RemoveContainer" containerID="02fa5ac098b11c09d932690d9a4be31f5d6f43533d9db4d59a743254bce11892" Dec 05 10:44:49 crc kubenswrapper[4796]: E1205 10:44:49.437468 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02fa5ac098b11c09d932690d9a4be31f5d6f43533d9db4d59a743254bce11892\": container with ID starting with 02fa5ac098b11c09d932690d9a4be31f5d6f43533d9db4d59a743254bce11892 not found: ID does not exist" containerID="02fa5ac098b11c09d932690d9a4be31f5d6f43533d9db4d59a743254bce11892" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.437495 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02fa5ac098b11c09d932690d9a4be31f5d6f43533d9db4d59a743254bce11892"} err="failed to get container status \"02fa5ac098b11c09d932690d9a4be31f5d6f43533d9db4d59a743254bce11892\": rpc error: code = NotFound desc = could not find container \"02fa5ac098b11c09d932690d9a4be31f5d6f43533d9db4d59a743254bce11892\": container with ID starting with 02fa5ac098b11c09d932690d9a4be31f5d6f43533d9db4d59a743254bce11892 not found: ID does not exist" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.437514 4796 scope.go:117] "RemoveContainer" containerID="41edc440f58fb63cf7fd571cf2e1576f6d345add64c3aa95d2ab5e12423231cf" Dec 05 10:44:49 crc kubenswrapper[4796]: E1205 10:44:49.437833 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41edc440f58fb63cf7fd571cf2e1576f6d345add64c3aa95d2ab5e12423231cf\": container with ID starting with 41edc440f58fb63cf7fd571cf2e1576f6d345add64c3aa95d2ab5e12423231cf not found: ID does not exist" 
containerID="41edc440f58fb63cf7fd571cf2e1576f6d345add64c3aa95d2ab5e12423231cf" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.437853 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41edc440f58fb63cf7fd571cf2e1576f6d345add64c3aa95d2ab5e12423231cf"} err="failed to get container status \"41edc440f58fb63cf7fd571cf2e1576f6d345add64c3aa95d2ab5e12423231cf\": rpc error: code = NotFound desc = could not find container \"41edc440f58fb63cf7fd571cf2e1576f6d345add64c3aa95d2ab5e12423231cf\": container with ID starting with 41edc440f58fb63cf7fd571cf2e1576f6d345add64c3aa95d2ab5e12423231cf not found: ID does not exist" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.512729 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76ca18a4-f216-4325-b15a-adda1d95dddd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.512767 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2csv\" (UniqueName: \"kubernetes.io/projected/76ca18a4-f216-4325-b15a-adda1d95dddd-kube-api-access-m2csv\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.512817 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d179e637-ffa5-41af-9038-6728586665a6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.512879 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76ca18a4-f216-4325-b15a-adda1d95dddd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.512928 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d179e637-ffa5-41af-9038-6728586665a6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.513023 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.513054 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76ca18a4-f216-4325-b15a-adda1d95dddd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.513089 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76ca18a4-f216-4325-b15a-adda1d95dddd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.513122 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.513144 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d179e637-ffa5-41af-9038-6728586665a6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.513158 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d179e637-ffa5-41af-9038-6728586665a6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.513177 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d179e637-ffa5-41af-9038-6728586665a6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.513178 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76ca18a4-f216-4325-b15a-adda1d95dddd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.513225 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qgfw\" (UniqueName: \"kubernetes.io/projected/d179e637-ffa5-41af-9038-6728586665a6-kube-api-access-6qgfw\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.513254 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d179e637-ffa5-41af-9038-6728586665a6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.513282 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76ca18a4-f216-4325-b15a-adda1d95dddd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.513302 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76ca18a4-f216-4325-b15a-adda1d95dddd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.513326 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76ca18a4-f216-4325-b15a-adda1d95dddd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.513342 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d179e637-ffa5-41af-9038-6728586665a6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 
10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.513359 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76ca18a4-f216-4325-b15a-adda1d95dddd-config-data\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.513375 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d179e637-ffa5-41af-9038-6728586665a6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.513393 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d179e637-ffa5-41af-9038-6728586665a6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.513411 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d179e637-ffa5-41af-9038-6728586665a6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.514195 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.514296 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d179e637-ffa5-41af-9038-6728586665a6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.515448 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d179e637-ffa5-41af-9038-6728586665a6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.515944 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d179e637-ffa5-41af-9038-6728586665a6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.516315 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d179e637-ffa5-41af-9038-6728586665a6-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.520212 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d179e637-ffa5-41af-9038-6728586665a6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.520712 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/d179e637-ffa5-41af-9038-6728586665a6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.524352 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d179e637-ffa5-41af-9038-6728586665a6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.528658 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d179e637-ffa5-41af-9038-6728586665a6-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.545286 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qgfw\" (UniqueName: \"kubernetes.io/projected/d179e637-ffa5-41af-9038-6728586665a6-kube-api-access-6qgfw\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.577947 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d179e637-ffa5-41af-9038-6728586665a6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.614859 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76ca18a4-f216-4325-b15a-adda1d95dddd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " 
pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.614941 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76ca18a4-f216-4325-b15a-adda1d95dddd-config-data\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.615607 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76ca18a4-f216-4325-b15a-adda1d95dddd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.615643 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2csv\" (UniqueName: \"kubernetes.io/projected/76ca18a4-f216-4325-b15a-adda1d95dddd-kube-api-access-m2csv\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.616031 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76ca18a4-f216-4325-b15a-adda1d95dddd-config-data\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.615730 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76ca18a4-f216-4325-b15a-adda1d95dddd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.616116 4796 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76ca18a4-f216-4325-b15a-adda1d95dddd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.616139 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.616211 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76ca18a4-f216-4325-b15a-adda1d95dddd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.616235 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.616327 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76ca18a4-f216-4325-b15a-adda1d95dddd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.616346 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76ca18a4-f216-4325-b15a-adda1d95dddd-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.616440 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76ca18a4-f216-4325-b15a-adda1d95dddd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.616500 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76ca18a4-f216-4325-b15a-adda1d95dddd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.616614 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76ca18a4-f216-4325-b15a-adda1d95dddd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.616645 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76ca18a4-f216-4325-b15a-adda1d95dddd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.617626 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76ca18a4-f216-4325-b15a-adda1d95dddd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.619574 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76ca18a4-f216-4325-b15a-adda1d95dddd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.620270 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76ca18a4-f216-4325-b15a-adda1d95dddd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.620315 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76ca18a4-f216-4325-b15a-adda1d95dddd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.620480 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76ca18a4-f216-4325-b15a-adda1d95dddd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.631224 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2csv\" (UniqueName: \"kubernetes.io/projected/76ca18a4-f216-4325-b15a-adda1d95dddd-kube-api-access-m2csv\") pod \"rabbitmq-server-0\" (UID: \"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.639802 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: 
\"76ca18a4-f216-4325-b15a-adda1d95dddd\") " pod="openstack/rabbitmq-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.655344 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:44:49 crc kubenswrapper[4796]: I1205 10:44:49.762623 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.041063 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a61d456-9eea-447f-b576-77473222d108" path="/var/lib/kubelet/pods/1a61d456-9eea-447f-b576-77473222d108/volumes" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.041813 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f735325f-6e38-45a2-a5bd-9ad19c40b36f" path="/var/lib/kubelet/pods/f735325f-6e38-45a2-a5bd-9ad19c40b36f/volumes" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.046557 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.151116 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 05 10:44:50 crc kubenswrapper[4796]: W1205 10:44:50.154744 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76ca18a4_f216_4325_b15a_adda1d95dddd.slice/crio-36c5ba6409f4eda893d7cfda9cb0c8320919d94aaf13e534d4aadce56ff14902 WatchSource:0}: Error finding container 36c5ba6409f4eda893d7cfda9cb0c8320919d94aaf13e534d4aadce56ff14902: Status 404 returned error can't find the container with id 36c5ba6409f4eda893d7cfda9cb0c8320919d94aaf13e534d4aadce56ff14902 Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.276678 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"76ca18a4-f216-4325-b15a-adda1d95dddd","Type":"ContainerStarted","Data":"36c5ba6409f4eda893d7cfda9cb0c8320919d94aaf13e534d4aadce56ff14902"} Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.277758 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d179e637-ffa5-41af-9038-6728586665a6","Type":"ContainerStarted","Data":"6657bae532218d805a245a0a1c618fc69f0152b837bce95fb04e3c29144994af"} Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.508175 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d96bc86b9-9gct4"] Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.509754 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.512079 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.521787 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d96bc86b9-9gct4"] Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.637822 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-dns-swift-storage-0\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.637879 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-ovsdbserver-sb\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: 
I1205 10:44:50.637902 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-ovsdbserver-nb\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.637943 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-config\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.637974 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-openstack-edpm-ipam\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.637999 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-dns-svc\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.638027 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zrfs\" (UniqueName: \"kubernetes.io/projected/65d5dde8-5d15-4c29-8292-29fc06eb308f-kube-api-access-9zrfs\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 
crc kubenswrapper[4796]: I1205 10:44:50.739666 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-openstack-edpm-ipam\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.739742 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-dns-svc\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.739783 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zrfs\" (UniqueName: \"kubernetes.io/projected/65d5dde8-5d15-4c29-8292-29fc06eb308f-kube-api-access-9zrfs\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.739854 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-dns-swift-storage-0\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.739890 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-ovsdbserver-sb\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 
10:44:50.739909 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-ovsdbserver-nb\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.739949 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-config\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.740707 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-dns-svc\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.740732 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-config\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.740929 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-dns-swift-storage-0\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.740966 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-ovsdbserver-sb\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.741078 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-ovsdbserver-nb\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.741265 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-openstack-edpm-ipam\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.756481 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zrfs\" (UniqueName: \"kubernetes.io/projected/65d5dde8-5d15-4c29-8292-29fc06eb308f-kube-api-access-9zrfs\") pod \"dnsmasq-dns-d96bc86b9-9gct4\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:50 crc kubenswrapper[4796]: I1205 10:44:50.826061 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:51 crc kubenswrapper[4796]: I1205 10:44:51.287467 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"76ca18a4-f216-4325-b15a-adda1d95dddd","Type":"ContainerStarted","Data":"0841fb2468dd1de80c4ba76c78ea56e64d6ae4e671a1964587dba66c599ddd17"} Dec 05 10:44:51 crc kubenswrapper[4796]: I1205 10:44:51.289510 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d179e637-ffa5-41af-9038-6728586665a6","Type":"ContainerStarted","Data":"8346e080f5dbfd588dfe7c34fb1fef41c38edd95e9b73a1ba442f5d49234306e"} Dec 05 10:44:51 crc kubenswrapper[4796]: I1205 10:44:51.296563 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d96bc86b9-9gct4"] Dec 05 10:44:51 crc kubenswrapper[4796]: W1205 10:44:51.301071 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod65d5dde8_5d15_4c29_8292_29fc06eb308f.slice/crio-d7c846e8865d3b2bc6e30a1d7a63415fa135efaed9dc233358271ce8c3695d6c WatchSource:0}: Error finding container d7c846e8865d3b2bc6e30a1d7a63415fa135efaed9dc233358271ce8c3695d6c: Status 404 returned error can't find the container with id d7c846e8865d3b2bc6e30a1d7a63415fa135efaed9dc233358271ce8c3695d6c Dec 05 10:44:52 crc kubenswrapper[4796]: I1205 10:44:52.298244 4796 generic.go:334] "Generic (PLEG): container finished" podID="65d5dde8-5d15-4c29-8292-29fc06eb308f" containerID="8e5697c2410d5cef30ca40ef1588718ed51ab359e17eee111a405b16322c6e82" exitCode=0 Dec 05 10:44:52 crc kubenswrapper[4796]: I1205 10:44:52.298336 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" event={"ID":"65d5dde8-5d15-4c29-8292-29fc06eb308f","Type":"ContainerDied","Data":"8e5697c2410d5cef30ca40ef1588718ed51ab359e17eee111a405b16322c6e82"} Dec 05 10:44:52 crc kubenswrapper[4796]: 
I1205 10:44:52.298551 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" event={"ID":"65d5dde8-5d15-4c29-8292-29fc06eb308f","Type":"ContainerStarted","Data":"d7c846e8865d3b2bc6e30a1d7a63415fa135efaed9dc233358271ce8c3695d6c"} Dec 05 10:44:53 crc kubenswrapper[4796]: I1205 10:44:53.306598 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" event={"ID":"65d5dde8-5d15-4c29-8292-29fc06eb308f","Type":"ContainerStarted","Data":"4ada455c3ebf18388c5671a9afc62a6c8e17a96af0ac5fc058e3b37a36271e8c"} Dec 05 10:44:53 crc kubenswrapper[4796]: I1205 10:44:53.307093 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:44:53 crc kubenswrapper[4796]: I1205 10:44:53.327692 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" podStartSLOduration=3.327670743 podStartE2EDuration="3.327670743s" podCreationTimestamp="2025-12-05 10:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:44:53.32080187 +0000 UTC m=+1039.608907383" watchObservedRunningTime="2025-12-05 10:44:53.327670743 +0000 UTC m=+1039.615776256" Dec 05 10:45:00 crc kubenswrapper[4796]: I1205 10:45:00.135292 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415525-c4rm9"] Dec 05 10:45:00 crc kubenswrapper[4796]: I1205 10:45:00.136790 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415525-c4rm9" Dec 05 10:45:00 crc kubenswrapper[4796]: I1205 10:45:00.138910 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 10:45:00 crc kubenswrapper[4796]: I1205 10:45:00.141591 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 10:45:00 crc kubenswrapper[4796]: I1205 10:45:00.141753 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415525-c4rm9"] Dec 05 10:45:00 crc kubenswrapper[4796]: I1205 10:45:00.287757 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8nj7\" (UniqueName: \"kubernetes.io/projected/f367d4fa-182c-447e-98d4-bd6adffa1ac1-kube-api-access-w8nj7\") pod \"collect-profiles-29415525-c4rm9\" (UID: \"f367d4fa-182c-447e-98d4-bd6adffa1ac1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415525-c4rm9" Dec 05 10:45:00 crc kubenswrapper[4796]: I1205 10:45:00.287815 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f367d4fa-182c-447e-98d4-bd6adffa1ac1-secret-volume\") pod \"collect-profiles-29415525-c4rm9\" (UID: \"f367d4fa-182c-447e-98d4-bd6adffa1ac1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415525-c4rm9" Dec 05 10:45:00 crc kubenswrapper[4796]: I1205 10:45:00.287845 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f367d4fa-182c-447e-98d4-bd6adffa1ac1-config-volume\") pod \"collect-profiles-29415525-c4rm9\" (UID: \"f367d4fa-182c-447e-98d4-bd6adffa1ac1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29415525-c4rm9" Dec 05 10:45:00 crc kubenswrapper[4796]: I1205 10:45:00.389429 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f367d4fa-182c-447e-98d4-bd6adffa1ac1-config-volume\") pod \"collect-profiles-29415525-c4rm9\" (UID: \"f367d4fa-182c-447e-98d4-bd6adffa1ac1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415525-c4rm9" Dec 05 10:45:00 crc kubenswrapper[4796]: I1205 10:45:00.389862 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8nj7\" (UniqueName: \"kubernetes.io/projected/f367d4fa-182c-447e-98d4-bd6adffa1ac1-kube-api-access-w8nj7\") pod \"collect-profiles-29415525-c4rm9\" (UID: \"f367d4fa-182c-447e-98d4-bd6adffa1ac1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415525-c4rm9" Dec 05 10:45:00 crc kubenswrapper[4796]: I1205 10:45:00.389898 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f367d4fa-182c-447e-98d4-bd6adffa1ac1-secret-volume\") pod \"collect-profiles-29415525-c4rm9\" (UID: \"f367d4fa-182c-447e-98d4-bd6adffa1ac1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415525-c4rm9" Dec 05 10:45:00 crc kubenswrapper[4796]: I1205 10:45:00.390170 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f367d4fa-182c-447e-98d4-bd6adffa1ac1-config-volume\") pod \"collect-profiles-29415525-c4rm9\" (UID: \"f367d4fa-182c-447e-98d4-bd6adffa1ac1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415525-c4rm9" Dec 05 10:45:00 crc kubenswrapper[4796]: I1205 10:45:00.401289 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f367d4fa-182c-447e-98d4-bd6adffa1ac1-secret-volume\") pod \"collect-profiles-29415525-c4rm9\" (UID: \"f367d4fa-182c-447e-98d4-bd6adffa1ac1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415525-c4rm9" Dec 05 10:45:00 crc kubenswrapper[4796]: I1205 10:45:00.403172 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8nj7\" (UniqueName: \"kubernetes.io/projected/f367d4fa-182c-447e-98d4-bd6adffa1ac1-kube-api-access-w8nj7\") pod \"collect-profiles-29415525-c4rm9\" (UID: \"f367d4fa-182c-447e-98d4-bd6adffa1ac1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415525-c4rm9" Dec 05 10:45:00 crc kubenswrapper[4796]: I1205 10:45:00.451786 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415525-c4rm9" Dec 05 10:45:00 crc kubenswrapper[4796]: I1205 10:45:00.817595 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415525-c4rm9"] Dec 05 10:45:00 crc kubenswrapper[4796]: I1205 10:45:00.827795 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:45:00 crc kubenswrapper[4796]: I1205 10:45:00.877381 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9cbcb645-5ct6c"] Dec 05 10:45:00 crc kubenswrapper[4796]: I1205 10:45:00.877597 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" podUID="1c478894-8932-435f-966f-73b440b0ddab" containerName="dnsmasq-dns" containerID="cri-o://4dd558df598e8d6e4b63799374c39052065e771f89a0cfa2c4e5187eb4fdd2a9" gracePeriod=10 Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.016853 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6574f55bb5-jfc7f"] Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 
10:45:01.024036 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.028030 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6574f55bb5-jfc7f"] Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.101927 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0dc9fea5-76b4-465f-9a96-a198004f4c2c-ovsdbserver-nb\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.101997 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0dc9fea5-76b4-465f-9a96-a198004f4c2c-dns-swift-storage-0\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.102057 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0dc9fea5-76b4-465f-9a96-a198004f4c2c-dns-svc\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.102073 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x78j6\" (UniqueName: \"kubernetes.io/projected/0dc9fea5-76b4-465f-9a96-a198004f4c2c-kube-api-access-x78j6\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 
10:45:01.102133 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0dc9fea5-76b4-465f-9a96-a198004f4c2c-ovsdbserver-sb\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.102157 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0dc9fea5-76b4-465f-9a96-a198004f4c2c-openstack-edpm-ipam\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.102177 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dc9fea5-76b4-465f-9a96-a198004f4c2c-config\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.204830 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0dc9fea5-76b4-465f-9a96-a198004f4c2c-dns-swift-storage-0\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.204978 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x78j6\" (UniqueName: \"kubernetes.io/projected/0dc9fea5-76b4-465f-9a96-a198004f4c2c-kube-api-access-x78j6\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc 
kubenswrapper[4796]: I1205 10:45:01.205022 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0dc9fea5-76b4-465f-9a96-a198004f4c2c-dns-svc\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.205125 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0dc9fea5-76b4-465f-9a96-a198004f4c2c-ovsdbserver-sb\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.205175 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0dc9fea5-76b4-465f-9a96-a198004f4c2c-openstack-edpm-ipam\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.205208 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dc9fea5-76b4-465f-9a96-a198004f4c2c-config\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.205330 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0dc9fea5-76b4-465f-9a96-a198004f4c2c-ovsdbserver-nb\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.206118 4796 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0dc9fea5-76b4-465f-9a96-a198004f4c2c-dns-svc\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.206226 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0dc9fea5-76b4-465f-9a96-a198004f4c2c-ovsdbserver-sb\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.206241 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0dc9fea5-76b4-465f-9a96-a198004f4c2c-openstack-edpm-ipam\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.208564 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0dc9fea5-76b4-465f-9a96-a198004f4c2c-config\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.208843 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0dc9fea5-76b4-465f-9a96-a198004f4c2c-dns-swift-storage-0\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.209385 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/0dc9fea5-76b4-465f-9a96-a198004f4c2c-ovsdbserver-nb\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.228358 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x78j6\" (UniqueName: \"kubernetes.io/projected/0dc9fea5-76b4-465f-9a96-a198004f4c2c-kube-api-access-x78j6\") pod \"dnsmasq-dns-6574f55bb5-jfc7f\" (UID: \"0dc9fea5-76b4-465f-9a96-a198004f4c2c\") " pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.249105 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.306426 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ksjv\" (UniqueName: \"kubernetes.io/projected/1c478894-8932-435f-966f-73b440b0ddab-kube-api-access-5ksjv\") pod \"1c478894-8932-435f-966f-73b440b0ddab\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.306539 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-config\") pod \"1c478894-8932-435f-966f-73b440b0ddab\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.306626 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-ovsdbserver-sb\") pod \"1c478894-8932-435f-966f-73b440b0ddab\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.306647 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-ovsdbserver-nb\") pod \"1c478894-8932-435f-966f-73b440b0ddab\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.306712 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-dns-swift-storage-0\") pod \"1c478894-8932-435f-966f-73b440b0ddab\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.306797 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-dns-svc\") pod \"1c478894-8932-435f-966f-73b440b0ddab\" (UID: \"1c478894-8932-435f-966f-73b440b0ddab\") " Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.309574 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c478894-8932-435f-966f-73b440b0ddab-kube-api-access-5ksjv" (OuterVolumeSpecName: "kube-api-access-5ksjv") pod "1c478894-8932-435f-966f-73b440b0ddab" (UID: "1c478894-8932-435f-966f-73b440b0ddab"). InnerVolumeSpecName "kube-api-access-5ksjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.340137 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-config" (OuterVolumeSpecName: "config") pod "1c478894-8932-435f-966f-73b440b0ddab" (UID: "1c478894-8932-435f-966f-73b440b0ddab"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.340475 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1c478894-8932-435f-966f-73b440b0ddab" (UID: "1c478894-8932-435f-966f-73b440b0ddab"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.341757 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1c478894-8932-435f-966f-73b440b0ddab" (UID: "1c478894-8932-435f-966f-73b440b0ddab"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.344225 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1c478894-8932-435f-966f-73b440b0ddab" (UID: "1c478894-8932-435f-966f-73b440b0ddab"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.344653 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.345039 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1c478894-8932-435f-966f-73b440b0ddab" (UID: "1c478894-8932-435f-966f-73b440b0ddab"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.360397 4796 generic.go:334] "Generic (PLEG): container finished" podID="1c478894-8932-435f-966f-73b440b0ddab" containerID="4dd558df598e8d6e4b63799374c39052065e771f89a0cfa2c4e5187eb4fdd2a9" exitCode=0 Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.360432 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" event={"ID":"1c478894-8932-435f-966f-73b440b0ddab","Type":"ContainerDied","Data":"4dd558df598e8d6e4b63799374c39052065e771f89a0cfa2c4e5187eb4fdd2a9"} Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.360459 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.360477 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9cbcb645-5ct6c" event={"ID":"1c478894-8932-435f-966f-73b440b0ddab","Type":"ContainerDied","Data":"8a0efd7e18854cef305f17f2eb3fc45d8b11c636b3b8e7a91380572b1e35271d"} Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.360496 4796 scope.go:117] "RemoveContainer" containerID="4dd558df598e8d6e4b63799374c39052065e771f89a0cfa2c4e5187eb4fdd2a9" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.362799 4796 generic.go:334] "Generic (PLEG): container finished" podID="f367d4fa-182c-447e-98d4-bd6adffa1ac1" containerID="9ca710da031e2e4434d334dc4e8f379d384d98674d4e0b1d4882339d437deb46" exitCode=0 Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.362826 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415525-c4rm9" event={"ID":"f367d4fa-182c-447e-98d4-bd6adffa1ac1","Type":"ContainerDied","Data":"9ca710da031e2e4434d334dc4e8f379d384d98674d4e0b1d4882339d437deb46"} Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.362841 4796 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415525-c4rm9" event={"ID":"f367d4fa-182c-447e-98d4-bd6adffa1ac1","Type":"ContainerStarted","Data":"e3ba28cbbc4685d4b79f85aed4816ed8b8aa83b54573bc4818fb65004b291c09"} Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.389129 4796 scope.go:117] "RemoveContainer" containerID="ea2913c389ea4e67c8d764bebea5d5668e7f9a17ac2f8c8e737c28942a7806b4" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.396596 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9cbcb645-5ct6c"] Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.403000 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9cbcb645-5ct6c"] Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.409032 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.409058 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ksjv\" (UniqueName: \"kubernetes.io/projected/1c478894-8932-435f-966f-73b440b0ddab-kube-api-access-5ksjv\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.409088 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.409098 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.409106 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.409113 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c478894-8932-435f-966f-73b440b0ddab-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.461482 4796 scope.go:117] "RemoveContainer" containerID="4dd558df598e8d6e4b63799374c39052065e771f89a0cfa2c4e5187eb4fdd2a9" Dec 05 10:45:01 crc kubenswrapper[4796]: E1205 10:45:01.468338 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dd558df598e8d6e4b63799374c39052065e771f89a0cfa2c4e5187eb4fdd2a9\": container with ID starting with 4dd558df598e8d6e4b63799374c39052065e771f89a0cfa2c4e5187eb4fdd2a9 not found: ID does not exist" containerID="4dd558df598e8d6e4b63799374c39052065e771f89a0cfa2c4e5187eb4fdd2a9" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.468376 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dd558df598e8d6e4b63799374c39052065e771f89a0cfa2c4e5187eb4fdd2a9"} err="failed to get container status \"4dd558df598e8d6e4b63799374c39052065e771f89a0cfa2c4e5187eb4fdd2a9\": rpc error: code = NotFound desc = could not find container \"4dd558df598e8d6e4b63799374c39052065e771f89a0cfa2c4e5187eb4fdd2a9\": container with ID starting with 4dd558df598e8d6e4b63799374c39052065e771f89a0cfa2c4e5187eb4fdd2a9 not found: ID does not exist" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.468404 4796 scope.go:117] "RemoveContainer" containerID="ea2913c389ea4e67c8d764bebea5d5668e7f9a17ac2f8c8e737c28942a7806b4" Dec 05 10:45:01 crc kubenswrapper[4796]: E1205 10:45:01.471216 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ea2913c389ea4e67c8d764bebea5d5668e7f9a17ac2f8c8e737c28942a7806b4\": container with ID starting with ea2913c389ea4e67c8d764bebea5d5668e7f9a17ac2f8c8e737c28942a7806b4 not found: ID does not exist" containerID="ea2913c389ea4e67c8d764bebea5d5668e7f9a17ac2f8c8e737c28942a7806b4" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.471261 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea2913c389ea4e67c8d764bebea5d5668e7f9a17ac2f8c8e737c28942a7806b4"} err="failed to get container status \"ea2913c389ea4e67c8d764bebea5d5668e7f9a17ac2f8c8e737c28942a7806b4\": rpc error: code = NotFound desc = could not find container \"ea2913c389ea4e67c8d764bebea5d5668e7f9a17ac2f8c8e737c28942a7806b4\": container with ID starting with ea2913c389ea4e67c8d764bebea5d5668e7f9a17ac2f8c8e737c28942a7806b4 not found: ID does not exist" Dec 05 10:45:01 crc kubenswrapper[4796]: I1205 10:45:01.729518 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6574f55bb5-jfc7f"] Dec 05 10:45:02 crc kubenswrapper[4796]: I1205 10:45:02.039762 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c478894-8932-435f-966f-73b440b0ddab" path="/var/lib/kubelet/pods/1c478894-8932-435f-966f-73b440b0ddab/volumes" Dec 05 10:45:02 crc kubenswrapper[4796]: I1205 10:45:02.372930 4796 generic.go:334] "Generic (PLEG): container finished" podID="0dc9fea5-76b4-465f-9a96-a198004f4c2c" containerID="d489717e163d20549aea376e446adabfecdbce4b419e0a54ac4c4351006a7736" exitCode=0 Dec 05 10:45:02 crc kubenswrapper[4796]: I1205 10:45:02.372971 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" event={"ID":"0dc9fea5-76b4-465f-9a96-a198004f4c2c","Type":"ContainerDied","Data":"d489717e163d20549aea376e446adabfecdbce4b419e0a54ac4c4351006a7736"} Dec 05 10:45:02 crc kubenswrapper[4796]: I1205 10:45:02.373288 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" event={"ID":"0dc9fea5-76b4-465f-9a96-a198004f4c2c","Type":"ContainerStarted","Data":"b2598c6c83238e8193e8656820c83e57bc547495d1a1f4ad242fac063014b06b"} Dec 05 10:45:02 crc kubenswrapper[4796]: I1205 10:45:02.629805 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415525-c4rm9" Dec 05 10:45:02 crc kubenswrapper[4796]: I1205 10:45:02.742466 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f367d4fa-182c-447e-98d4-bd6adffa1ac1-config-volume\") pod \"f367d4fa-182c-447e-98d4-bd6adffa1ac1\" (UID: \"f367d4fa-182c-447e-98d4-bd6adffa1ac1\") " Dec 05 10:45:02 crc kubenswrapper[4796]: I1205 10:45:02.742764 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8nj7\" (UniqueName: \"kubernetes.io/projected/f367d4fa-182c-447e-98d4-bd6adffa1ac1-kube-api-access-w8nj7\") pod \"f367d4fa-182c-447e-98d4-bd6adffa1ac1\" (UID: \"f367d4fa-182c-447e-98d4-bd6adffa1ac1\") " Dec 05 10:45:02 crc kubenswrapper[4796]: I1205 10:45:02.742974 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f367d4fa-182c-447e-98d4-bd6adffa1ac1-secret-volume\") pod \"f367d4fa-182c-447e-98d4-bd6adffa1ac1\" (UID: \"f367d4fa-182c-447e-98d4-bd6adffa1ac1\") " Dec 05 10:45:02 crc kubenswrapper[4796]: I1205 10:45:02.743055 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f367d4fa-182c-447e-98d4-bd6adffa1ac1-config-volume" (OuterVolumeSpecName: "config-volume") pod "f367d4fa-182c-447e-98d4-bd6adffa1ac1" (UID: "f367d4fa-182c-447e-98d4-bd6adffa1ac1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:45:02 crc kubenswrapper[4796]: I1205 10:45:02.743400 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f367d4fa-182c-447e-98d4-bd6adffa1ac1-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:02 crc kubenswrapper[4796]: I1205 10:45:02.747328 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f367d4fa-182c-447e-98d4-bd6adffa1ac1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f367d4fa-182c-447e-98d4-bd6adffa1ac1" (UID: "f367d4fa-182c-447e-98d4-bd6adffa1ac1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:45:02 crc kubenswrapper[4796]: I1205 10:45:02.747365 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f367d4fa-182c-447e-98d4-bd6adffa1ac1-kube-api-access-w8nj7" (OuterVolumeSpecName: "kube-api-access-w8nj7") pod "f367d4fa-182c-447e-98d4-bd6adffa1ac1" (UID: "f367d4fa-182c-447e-98d4-bd6adffa1ac1"). InnerVolumeSpecName "kube-api-access-w8nj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:45:02 crc kubenswrapper[4796]: I1205 10:45:02.877880 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8nj7\" (UniqueName: \"kubernetes.io/projected/f367d4fa-182c-447e-98d4-bd6adffa1ac1-kube-api-access-w8nj7\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:02 crc kubenswrapper[4796]: I1205 10:45:02.877907 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f367d4fa-182c-447e-98d4-bd6adffa1ac1-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:03 crc kubenswrapper[4796]: I1205 10:45:03.382071 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415525-c4rm9" event={"ID":"f367d4fa-182c-447e-98d4-bd6adffa1ac1","Type":"ContainerDied","Data":"e3ba28cbbc4685d4b79f85aed4816ed8b8aa83b54573bc4818fb65004b291c09"} Dec 05 10:45:03 crc kubenswrapper[4796]: I1205 10:45:03.382113 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3ba28cbbc4685d4b79f85aed4816ed8b8aa83b54573bc4818fb65004b291c09" Dec 05 10:45:03 crc kubenswrapper[4796]: I1205 10:45:03.382085 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415525-c4rm9" Dec 05 10:45:03 crc kubenswrapper[4796]: I1205 10:45:03.383891 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" event={"ID":"0dc9fea5-76b4-465f-9a96-a198004f4c2c","Type":"ContainerStarted","Data":"cc5edd16441133888dad41af7ba09ea53aac0f085fa7b51f033ddb1491baf759"} Dec 05 10:45:03 crc kubenswrapper[4796]: I1205 10:45:03.384042 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:03 crc kubenswrapper[4796]: I1205 10:45:03.404325 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" podStartSLOduration=3.404306939 podStartE2EDuration="3.404306939s" podCreationTimestamp="2025-12-05 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:45:03.397279207 +0000 UTC m=+1049.685384721" watchObservedRunningTime="2025-12-05 10:45:03.404306939 +0000 UTC m=+1049.692412452" Dec 05 10:45:11 crc kubenswrapper[4796]: I1205 10:45:11.345880 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6574f55bb5-jfc7f" Dec 05 10:45:11 crc kubenswrapper[4796]: I1205 10:45:11.384917 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d96bc86b9-9gct4"] Dec 05 10:45:11 crc kubenswrapper[4796]: I1205 10:45:11.385116 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" podUID="65d5dde8-5d15-4c29-8292-29fc06eb308f" containerName="dnsmasq-dns" containerID="cri-o://4ada455c3ebf18388c5671a9afc62a6c8e17a96af0ac5fc058e3b37a36271e8c" gracePeriod=10 Dec 05 10:45:11 crc kubenswrapper[4796]: I1205 10:45:11.788783 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:45:11 crc kubenswrapper[4796]: I1205 10:45:11.917594 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-ovsdbserver-nb\") pod \"65d5dde8-5d15-4c29-8292-29fc06eb308f\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " Dec 05 10:45:11 crc kubenswrapper[4796]: I1205 10:45:11.917650 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-config\") pod \"65d5dde8-5d15-4c29-8292-29fc06eb308f\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " Dec 05 10:45:11 crc kubenswrapper[4796]: I1205 10:45:11.917758 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-dns-svc\") pod \"65d5dde8-5d15-4c29-8292-29fc06eb308f\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " Dec 05 10:45:11 crc kubenswrapper[4796]: I1205 10:45:11.917790 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-dns-swift-storage-0\") pod \"65d5dde8-5d15-4c29-8292-29fc06eb308f\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " Dec 05 10:45:11 crc kubenswrapper[4796]: I1205 10:45:11.917848 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zrfs\" (UniqueName: \"kubernetes.io/projected/65d5dde8-5d15-4c29-8292-29fc06eb308f-kube-api-access-9zrfs\") pod \"65d5dde8-5d15-4c29-8292-29fc06eb308f\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " Dec 05 10:45:11 crc kubenswrapper[4796]: I1205 10:45:11.918008 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-ovsdbserver-sb\") pod \"65d5dde8-5d15-4c29-8292-29fc06eb308f\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " Dec 05 10:45:11 crc kubenswrapper[4796]: I1205 10:45:11.918028 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-openstack-edpm-ipam\") pod \"65d5dde8-5d15-4c29-8292-29fc06eb308f\" (UID: \"65d5dde8-5d15-4c29-8292-29fc06eb308f\") " Dec 05 10:45:11 crc kubenswrapper[4796]: I1205 10:45:11.924329 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65d5dde8-5d15-4c29-8292-29fc06eb308f-kube-api-access-9zrfs" (OuterVolumeSpecName: "kube-api-access-9zrfs") pod "65d5dde8-5d15-4c29-8292-29fc06eb308f" (UID: "65d5dde8-5d15-4c29-8292-29fc06eb308f"). InnerVolumeSpecName "kube-api-access-9zrfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:45:11 crc kubenswrapper[4796]: I1205 10:45:11.955399 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "65d5dde8-5d15-4c29-8292-29fc06eb308f" (UID: "65d5dde8-5d15-4c29-8292-29fc06eb308f"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:45:11 crc kubenswrapper[4796]: I1205 10:45:11.957094 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "65d5dde8-5d15-4c29-8292-29fc06eb308f" (UID: "65d5dde8-5d15-4c29-8292-29fc06eb308f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:45:11 crc kubenswrapper[4796]: I1205 10:45:11.960439 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "65d5dde8-5d15-4c29-8292-29fc06eb308f" (UID: "65d5dde8-5d15-4c29-8292-29fc06eb308f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:45:11 crc kubenswrapper[4796]: I1205 10:45:11.962749 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "65d5dde8-5d15-4c29-8292-29fc06eb308f" (UID: "65d5dde8-5d15-4c29-8292-29fc06eb308f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:45:11 crc kubenswrapper[4796]: I1205 10:45:11.963500 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "65d5dde8-5d15-4c29-8292-29fc06eb308f" (UID: "65d5dde8-5d15-4c29-8292-29fc06eb308f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:45:11 crc kubenswrapper[4796]: I1205 10:45:11.970235 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-config" (OuterVolumeSpecName: "config") pod "65d5dde8-5d15-4c29-8292-29fc06eb308f" (UID: "65d5dde8-5d15-4c29-8292-29fc06eb308f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:45:12 crc kubenswrapper[4796]: I1205 10:45:12.019778 4796 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:12 crc kubenswrapper[4796]: I1205 10:45:12.019807 4796 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:12 crc kubenswrapper[4796]: I1205 10:45:12.019818 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zrfs\" (UniqueName: \"kubernetes.io/projected/65d5dde8-5d15-4c29-8292-29fc06eb308f-kube-api-access-9zrfs\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:12 crc kubenswrapper[4796]: I1205 10:45:12.019827 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:12 crc kubenswrapper[4796]: I1205 10:45:12.019835 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:12 crc kubenswrapper[4796]: I1205 10:45:12.019844 4796 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:12 crc kubenswrapper[4796]: I1205 10:45:12.019851 4796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65d5dde8-5d15-4c29-8292-29fc06eb308f-config\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:12 crc kubenswrapper[4796]: I1205 10:45:12.447666 
4796 generic.go:334] "Generic (PLEG): container finished" podID="65d5dde8-5d15-4c29-8292-29fc06eb308f" containerID="4ada455c3ebf18388c5671a9afc62a6c8e17a96af0ac5fc058e3b37a36271e8c" exitCode=0 Dec 05 10:45:12 crc kubenswrapper[4796]: I1205 10:45:12.447720 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" event={"ID":"65d5dde8-5d15-4c29-8292-29fc06eb308f","Type":"ContainerDied","Data":"4ada455c3ebf18388c5671a9afc62a6c8e17a96af0ac5fc058e3b37a36271e8c"} Dec 05 10:45:12 crc kubenswrapper[4796]: I1205 10:45:12.447746 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" event={"ID":"65d5dde8-5d15-4c29-8292-29fc06eb308f","Type":"ContainerDied","Data":"d7c846e8865d3b2bc6e30a1d7a63415fa135efaed9dc233358271ce8c3695d6c"} Dec 05 10:45:12 crc kubenswrapper[4796]: I1205 10:45:12.447762 4796 scope.go:117] "RemoveContainer" containerID="4ada455c3ebf18388c5671a9afc62a6c8e17a96af0ac5fc058e3b37a36271e8c" Dec 05 10:45:12 crc kubenswrapper[4796]: I1205 10:45:12.447863 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d96bc86b9-9gct4" Dec 05 10:45:12 crc kubenswrapper[4796]: I1205 10:45:12.465162 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d96bc86b9-9gct4"] Dec 05 10:45:12 crc kubenswrapper[4796]: I1205 10:45:12.466075 4796 scope.go:117] "RemoveContainer" containerID="8e5697c2410d5cef30ca40ef1588718ed51ab359e17eee111a405b16322c6e82" Dec 05 10:45:12 crc kubenswrapper[4796]: I1205 10:45:12.472089 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d96bc86b9-9gct4"] Dec 05 10:45:12 crc kubenswrapper[4796]: I1205 10:45:12.485307 4796 scope.go:117] "RemoveContainer" containerID="4ada455c3ebf18388c5671a9afc62a6c8e17a96af0ac5fc058e3b37a36271e8c" Dec 05 10:45:12 crc kubenswrapper[4796]: E1205 10:45:12.485710 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ada455c3ebf18388c5671a9afc62a6c8e17a96af0ac5fc058e3b37a36271e8c\": container with ID starting with 4ada455c3ebf18388c5671a9afc62a6c8e17a96af0ac5fc058e3b37a36271e8c not found: ID does not exist" containerID="4ada455c3ebf18388c5671a9afc62a6c8e17a96af0ac5fc058e3b37a36271e8c" Dec 05 10:45:12 crc kubenswrapper[4796]: I1205 10:45:12.485748 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ada455c3ebf18388c5671a9afc62a6c8e17a96af0ac5fc058e3b37a36271e8c"} err="failed to get container status \"4ada455c3ebf18388c5671a9afc62a6c8e17a96af0ac5fc058e3b37a36271e8c\": rpc error: code = NotFound desc = could not find container \"4ada455c3ebf18388c5671a9afc62a6c8e17a96af0ac5fc058e3b37a36271e8c\": container with ID starting with 4ada455c3ebf18388c5671a9afc62a6c8e17a96af0ac5fc058e3b37a36271e8c not found: ID does not exist" Dec 05 10:45:12 crc kubenswrapper[4796]: I1205 10:45:12.485772 4796 scope.go:117] "RemoveContainer" containerID="8e5697c2410d5cef30ca40ef1588718ed51ab359e17eee111a405b16322c6e82" Dec 05 
10:45:12 crc kubenswrapper[4796]: E1205 10:45:12.486081 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e5697c2410d5cef30ca40ef1588718ed51ab359e17eee111a405b16322c6e82\": container with ID starting with 8e5697c2410d5cef30ca40ef1588718ed51ab359e17eee111a405b16322c6e82 not found: ID does not exist" containerID="8e5697c2410d5cef30ca40ef1588718ed51ab359e17eee111a405b16322c6e82" Dec 05 10:45:12 crc kubenswrapper[4796]: I1205 10:45:12.486105 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e5697c2410d5cef30ca40ef1588718ed51ab359e17eee111a405b16322c6e82"} err="failed to get container status \"8e5697c2410d5cef30ca40ef1588718ed51ab359e17eee111a405b16322c6e82\": rpc error: code = NotFound desc = could not find container \"8e5697c2410d5cef30ca40ef1588718ed51ab359e17eee111a405b16322c6e82\": container with ID starting with 8e5697c2410d5cef30ca40ef1588718ed51ab359e17eee111a405b16322c6e82 not found: ID does not exist" Dec 05 10:45:14 crc kubenswrapper[4796]: I1205 10:45:14.038615 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65d5dde8-5d15-4c29-8292-29fc06eb308f" path="/var/lib/kubelet/pods/65d5dde8-5d15-4c29-8292-29fc06eb308f/volumes" Dec 05 10:45:23 crc kubenswrapper[4796]: I1205 10:45:23.522542 4796 generic.go:334] "Generic (PLEG): container finished" podID="76ca18a4-f216-4325-b15a-adda1d95dddd" containerID="0841fb2468dd1de80c4ba76c78ea56e64d6ae4e671a1964587dba66c599ddd17" exitCode=0 Dec 05 10:45:23 crc kubenswrapper[4796]: I1205 10:45:23.522619 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"76ca18a4-f216-4325-b15a-adda1d95dddd","Type":"ContainerDied","Data":"0841fb2468dd1de80c4ba76c78ea56e64d6ae4e671a1964587dba66c599ddd17"} Dec 05 10:45:23 crc kubenswrapper[4796]: I1205 10:45:23.525032 4796 generic.go:334] "Generic (PLEG): container finished" 
podID="d179e637-ffa5-41af-9038-6728586665a6" containerID="8346e080f5dbfd588dfe7c34fb1fef41c38edd95e9b73a1ba442f5d49234306e" exitCode=0 Dec 05 10:45:23 crc kubenswrapper[4796]: I1205 10:45:23.525075 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d179e637-ffa5-41af-9038-6728586665a6","Type":"ContainerDied","Data":"8346e080f5dbfd588dfe7c34fb1fef41c38edd95e9b73a1ba442f5d49234306e"} Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.482664 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc"] Dec 05 10:45:24 crc kubenswrapper[4796]: E1205 10:45:24.484279 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c478894-8932-435f-966f-73b440b0ddab" containerName="init" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.484303 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c478894-8932-435f-966f-73b440b0ddab" containerName="init" Dec 05 10:45:24 crc kubenswrapper[4796]: E1205 10:45:24.484323 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d5dde8-5d15-4c29-8292-29fc06eb308f" containerName="dnsmasq-dns" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.484329 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d5dde8-5d15-4c29-8292-29fc06eb308f" containerName="dnsmasq-dns" Dec 05 10:45:24 crc kubenswrapper[4796]: E1205 10:45:24.484347 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c478894-8932-435f-966f-73b440b0ddab" containerName="dnsmasq-dns" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.484352 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c478894-8932-435f-966f-73b440b0ddab" containerName="dnsmasq-dns" Dec 05 10:45:24 crc kubenswrapper[4796]: E1205 10:45:24.484366 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f367d4fa-182c-447e-98d4-bd6adffa1ac1" containerName="collect-profiles" Dec 05 
10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.484373 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="f367d4fa-182c-447e-98d4-bd6adffa1ac1" containerName="collect-profiles" Dec 05 10:45:24 crc kubenswrapper[4796]: E1205 10:45:24.484387 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d5dde8-5d15-4c29-8292-29fc06eb308f" containerName="init" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.484393 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d5dde8-5d15-4c29-8292-29fc06eb308f" containerName="init" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.484622 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c478894-8932-435f-966f-73b440b0ddab" containerName="dnsmasq-dns" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.484643 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="f367d4fa-182c-447e-98d4-bd6adffa1ac1" containerName="collect-profiles" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.484653 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="65d5dde8-5d15-4c29-8292-29fc06eb308f" containerName="dnsmasq-dns" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.485264 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.488179 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.488410 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vmswc" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.488426 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.488592 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.503590 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc"] Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.535106 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"76ca18a4-f216-4325-b15a-adda1d95dddd","Type":"ContainerStarted","Data":"49ed7d9a0448d7b4bee91727095b4f21689b2f2aad29cf55d190a185a7443aa7"} Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.535314 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.537146 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d179e637-ffa5-41af-9038-6728586665a6","Type":"ContainerStarted","Data":"a1924202b9bf878a89fdbb69c276f65859bf01ddf1c922dfedae11234e34c442"} Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.537570 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 
10:45:24.573573 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.573558321 podStartE2EDuration="35.573558321s" podCreationTimestamp="2025-12-05 10:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:45:24.569844854 +0000 UTC m=+1070.857950368" watchObservedRunningTime="2025-12-05 10:45:24.573558321 +0000 UTC m=+1070.861663834" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.574385 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=35.574379086 podStartE2EDuration="35.574379086s" podCreationTimestamp="2025-12-05 10:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 10:45:24.554886704 +0000 UTC m=+1070.842992228" watchObservedRunningTime="2025-12-05 10:45:24.574379086 +0000 UTC m=+1070.862484599" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.620476 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt5sl\" (UniqueName: \"kubernetes.io/projected/224a1954-bad6-417b-8942-de297ca3195c-kube-api-access-mt5sl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc\" (UID: \"224a1954-bad6-417b-8942-de297ca3195c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.620520 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/224a1954-bad6-417b-8942-de297ca3195c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc\" (UID: \"224a1954-bad6-417b-8942-de297ca3195c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" Dec 
05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.620604 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/224a1954-bad6-417b-8942-de297ca3195c-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc\" (UID: \"224a1954-bad6-417b-8942-de297ca3195c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.620628 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/224a1954-bad6-417b-8942-de297ca3195c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc\" (UID: \"224a1954-bad6-417b-8942-de297ca3195c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.722522 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/224a1954-bad6-417b-8942-de297ca3195c-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc\" (UID: \"224a1954-bad6-417b-8942-de297ca3195c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.722573 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/224a1954-bad6-417b-8942-de297ca3195c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc\" (UID: \"224a1954-bad6-417b-8942-de297ca3195c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.722741 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt5sl\" (UniqueName: 
\"kubernetes.io/projected/224a1954-bad6-417b-8942-de297ca3195c-kube-api-access-mt5sl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc\" (UID: \"224a1954-bad6-417b-8942-de297ca3195c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.722793 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/224a1954-bad6-417b-8942-de297ca3195c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc\" (UID: \"224a1954-bad6-417b-8942-de297ca3195c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.727369 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/224a1954-bad6-417b-8942-de297ca3195c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc\" (UID: \"224a1954-bad6-417b-8942-de297ca3195c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.738048 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/224a1954-bad6-417b-8942-de297ca3195c-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc\" (UID: \"224a1954-bad6-417b-8942-de297ca3195c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.738108 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/224a1954-bad6-417b-8942-de297ca3195c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc\" (UID: \"224a1954-bad6-417b-8942-de297ca3195c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" Dec 05 10:45:24 crc 
kubenswrapper[4796]: I1205 10:45:24.740343 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt5sl\" (UniqueName: \"kubernetes.io/projected/224a1954-bad6-417b-8942-de297ca3195c-kube-api-access-mt5sl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc\" (UID: \"224a1954-bad6-417b-8942-de297ca3195c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" Dec 05 10:45:24 crc kubenswrapper[4796]: I1205 10:45:24.801110 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" Dec 05 10:45:25 crc kubenswrapper[4796]: I1205 10:45:25.263765 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc"] Dec 05 10:45:25 crc kubenswrapper[4796]: I1205 10:45:25.545053 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" event={"ID":"224a1954-bad6-417b-8942-de297ca3195c","Type":"ContainerStarted","Data":"deabc197d573370ccf2f343c66e03ebff7d993f2b51d402acb77e5daf9edf136"} Dec 05 10:45:32 crc kubenswrapper[4796]: I1205 10:45:32.610745 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" event={"ID":"224a1954-bad6-417b-8942-de297ca3195c","Type":"ContainerStarted","Data":"03eda0a0184d81c96caf1f569a5f7b704711a755ca19cc58444cf9a05f5a2a84"} Dec 05 10:45:32 crc kubenswrapper[4796]: I1205 10:45:32.622918 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" podStartSLOduration=1.639280962 podStartE2EDuration="8.622903577s" podCreationTimestamp="2025-12-05 10:45:24 +0000 UTC" firstStartedPulling="2025-12-05 10:45:25.267507914 +0000 UTC m=+1071.555613427" lastFinishedPulling="2025-12-05 10:45:32.251130528 +0000 UTC m=+1078.539236042" 
observedRunningTime="2025-12-05 10:45:32.620976802 +0000 UTC m=+1078.909082315" watchObservedRunningTime="2025-12-05 10:45:32.622903577 +0000 UTC m=+1078.911009090" Dec 05 10:45:35 crc kubenswrapper[4796]: I1205 10:45:35.177087 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:45:35 crc kubenswrapper[4796]: I1205 10:45:35.177657 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:45:39 crc kubenswrapper[4796]: I1205 10:45:39.658613 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 05 10:45:39 crc kubenswrapper[4796]: I1205 10:45:39.766299 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 05 10:45:43 crc kubenswrapper[4796]: I1205 10:45:43.690267 4796 generic.go:334] "Generic (PLEG): container finished" podID="224a1954-bad6-417b-8942-de297ca3195c" containerID="03eda0a0184d81c96caf1f569a5f7b704711a755ca19cc58444cf9a05f5a2a84" exitCode=0 Dec 05 10:45:43 crc kubenswrapper[4796]: I1205 10:45:43.690356 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" event={"ID":"224a1954-bad6-417b-8942-de297ca3195c","Type":"ContainerDied","Data":"03eda0a0184d81c96caf1f569a5f7b704711a755ca19cc58444cf9a05f5a2a84"} Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.010156 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.196506 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/224a1954-bad6-417b-8942-de297ca3195c-repo-setup-combined-ca-bundle\") pod \"224a1954-bad6-417b-8942-de297ca3195c\" (UID: \"224a1954-bad6-417b-8942-de297ca3195c\") " Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.196562 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/224a1954-bad6-417b-8942-de297ca3195c-inventory\") pod \"224a1954-bad6-417b-8942-de297ca3195c\" (UID: \"224a1954-bad6-417b-8942-de297ca3195c\") " Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.196705 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt5sl\" (UniqueName: \"kubernetes.io/projected/224a1954-bad6-417b-8942-de297ca3195c-kube-api-access-mt5sl\") pod \"224a1954-bad6-417b-8942-de297ca3195c\" (UID: \"224a1954-bad6-417b-8942-de297ca3195c\") " Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.196815 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/224a1954-bad6-417b-8942-de297ca3195c-ssh-key\") pod \"224a1954-bad6-417b-8942-de297ca3195c\" (UID: \"224a1954-bad6-417b-8942-de297ca3195c\") " Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.201256 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/224a1954-bad6-417b-8942-de297ca3195c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "224a1954-bad6-417b-8942-de297ca3195c" (UID: "224a1954-bad6-417b-8942-de297ca3195c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.201431 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/224a1954-bad6-417b-8942-de297ca3195c-kube-api-access-mt5sl" (OuterVolumeSpecName: "kube-api-access-mt5sl") pod "224a1954-bad6-417b-8942-de297ca3195c" (UID: "224a1954-bad6-417b-8942-de297ca3195c"). InnerVolumeSpecName "kube-api-access-mt5sl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.218293 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/224a1954-bad6-417b-8942-de297ca3195c-inventory" (OuterVolumeSpecName: "inventory") pod "224a1954-bad6-417b-8942-de297ca3195c" (UID: "224a1954-bad6-417b-8942-de297ca3195c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.218603 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/224a1954-bad6-417b-8942-de297ca3195c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "224a1954-bad6-417b-8942-de297ca3195c" (UID: "224a1954-bad6-417b-8942-de297ca3195c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.299073 4796 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/224a1954-bad6-417b-8942-de297ca3195c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.299099 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/224a1954-bad6-417b-8942-de297ca3195c-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.299110 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt5sl\" (UniqueName: \"kubernetes.io/projected/224a1954-bad6-417b-8942-de297ca3195c-kube-api-access-mt5sl\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.299119 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/224a1954-bad6-417b-8942-de297ca3195c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.706828 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" event={"ID":"224a1954-bad6-417b-8942-de297ca3195c","Type":"ContainerDied","Data":"deabc197d573370ccf2f343c66e03ebff7d993f2b51d402acb77e5daf9edf136"} Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.706868 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="deabc197d573370ccf2f343c66e03ebff7d993f2b51d402acb77e5daf9edf136" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.706883 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.757694 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97"] Dec 05 10:45:45 crc kubenswrapper[4796]: E1205 10:45:45.758058 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="224a1954-bad6-417b-8942-de297ca3195c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.758076 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="224a1954-bad6-417b-8942-de297ca3195c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.758254 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="224a1954-bad6-417b-8942-de297ca3195c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.758851 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.760853 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.760900 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.761035 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vmswc" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.761503 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.774346 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97"] Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.805404 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c43b4994-6331-4ed4-9180-1b32253929cf-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rqd97\" (UID: \"c43b4994-6331-4ed4-9180-1b32253929cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.805462 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c43b4994-6331-4ed4-9180-1b32253929cf-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rqd97\" (UID: \"c43b4994-6331-4ed4-9180-1b32253929cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.805589 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b97fq\" (UniqueName: \"kubernetes.io/projected/c43b4994-6331-4ed4-9180-1b32253929cf-kube-api-access-b97fq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rqd97\" (UID: \"c43b4994-6331-4ed4-9180-1b32253929cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.906410 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c43b4994-6331-4ed4-9180-1b32253929cf-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rqd97\" (UID: \"c43b4994-6331-4ed4-9180-1b32253929cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.906462 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c43b4994-6331-4ed4-9180-1b32253929cf-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rqd97\" (UID: \"c43b4994-6331-4ed4-9180-1b32253929cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.906512 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b97fq\" (UniqueName: \"kubernetes.io/projected/c43b4994-6331-4ed4-9180-1b32253929cf-kube-api-access-b97fq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rqd97\" (UID: \"c43b4994-6331-4ed4-9180-1b32253929cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.910290 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c43b4994-6331-4ed4-9180-1b32253929cf-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rqd97\" (UID: \"c43b4994-6331-4ed4-9180-1b32253929cf\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.910517 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c43b4994-6331-4ed4-9180-1b32253929cf-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rqd97\" (UID: \"c43b4994-6331-4ed4-9180-1b32253929cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97" Dec 05 10:45:45 crc kubenswrapper[4796]: I1205 10:45:45.919964 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b97fq\" (UniqueName: \"kubernetes.io/projected/c43b4994-6331-4ed4-9180-1b32253929cf-kube-api-access-b97fq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-rqd97\" (UID: \"c43b4994-6331-4ed4-9180-1b32253929cf\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97" Dec 05 10:45:46 crc kubenswrapper[4796]: I1205 10:45:46.070946 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97" Dec 05 10:45:46 crc kubenswrapper[4796]: I1205 10:45:46.497291 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97"] Dec 05 10:45:46 crc kubenswrapper[4796]: I1205 10:45:46.715843 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97" event={"ID":"c43b4994-6331-4ed4-9180-1b32253929cf","Type":"ContainerStarted","Data":"2895256bacb60d62abd452dd59e15bb60b6b1384982765516a232a0dc4e494d5"} Dec 05 10:45:47 crc kubenswrapper[4796]: I1205 10:45:47.724244 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97" event={"ID":"c43b4994-6331-4ed4-9180-1b32253929cf","Type":"ContainerStarted","Data":"e6439b4a63251eedf06fc3ceaf62185ab9854dcb317be5a6da621cd224061025"} Dec 05 10:45:47 crc kubenswrapper[4796]: I1205 10:45:47.738722 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97" podStartSLOduration=2.267117021 podStartE2EDuration="2.738701182s" podCreationTimestamp="2025-12-05 10:45:45 +0000 UTC" firstStartedPulling="2025-12-05 10:45:46.503208746 +0000 UTC m=+1092.791314250" lastFinishedPulling="2025-12-05 10:45:46.974792898 +0000 UTC m=+1093.262898411" observedRunningTime="2025-12-05 10:45:47.735076003 +0000 UTC m=+1094.023181517" watchObservedRunningTime="2025-12-05 10:45:47.738701182 +0000 UTC m=+1094.026806696" Dec 05 10:45:49 crc kubenswrapper[4796]: I1205 10:45:49.738655 4796 generic.go:334] "Generic (PLEG): container finished" podID="c43b4994-6331-4ed4-9180-1b32253929cf" containerID="e6439b4a63251eedf06fc3ceaf62185ab9854dcb317be5a6da621cd224061025" exitCode=0 Dec 05 10:45:49 crc kubenswrapper[4796]: I1205 10:45:49.738712 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97" event={"ID":"c43b4994-6331-4ed4-9180-1b32253929cf","Type":"ContainerDied","Data":"e6439b4a63251eedf06fc3ceaf62185ab9854dcb317be5a6da621cd224061025"} Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.060777 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.190235 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b97fq\" (UniqueName: \"kubernetes.io/projected/c43b4994-6331-4ed4-9180-1b32253929cf-kube-api-access-b97fq\") pod \"c43b4994-6331-4ed4-9180-1b32253929cf\" (UID: \"c43b4994-6331-4ed4-9180-1b32253929cf\") " Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.190371 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c43b4994-6331-4ed4-9180-1b32253929cf-inventory\") pod \"c43b4994-6331-4ed4-9180-1b32253929cf\" (UID: \"c43b4994-6331-4ed4-9180-1b32253929cf\") " Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.190567 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c43b4994-6331-4ed4-9180-1b32253929cf-ssh-key\") pod \"c43b4994-6331-4ed4-9180-1b32253929cf\" (UID: \"c43b4994-6331-4ed4-9180-1b32253929cf\") " Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.195374 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c43b4994-6331-4ed4-9180-1b32253929cf-kube-api-access-b97fq" (OuterVolumeSpecName: "kube-api-access-b97fq") pod "c43b4994-6331-4ed4-9180-1b32253929cf" (UID: "c43b4994-6331-4ed4-9180-1b32253929cf"). InnerVolumeSpecName "kube-api-access-b97fq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.214031 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c43b4994-6331-4ed4-9180-1b32253929cf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c43b4994-6331-4ed4-9180-1b32253929cf" (UID: "c43b4994-6331-4ed4-9180-1b32253929cf"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.214527 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c43b4994-6331-4ed4-9180-1b32253929cf-inventory" (OuterVolumeSpecName: "inventory") pod "c43b4994-6331-4ed4-9180-1b32253929cf" (UID: "c43b4994-6331-4ed4-9180-1b32253929cf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.294063 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c43b4994-6331-4ed4-9180-1b32253929cf-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.294088 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b97fq\" (UniqueName: \"kubernetes.io/projected/c43b4994-6331-4ed4-9180-1b32253929cf-kube-api-access-b97fq\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.294110 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c43b4994-6331-4ed4-9180-1b32253929cf-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.753511 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97" 
event={"ID":"c43b4994-6331-4ed4-9180-1b32253929cf","Type":"ContainerDied","Data":"2895256bacb60d62abd452dd59e15bb60b6b1384982765516a232a0dc4e494d5"} Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.753767 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2895256bacb60d62abd452dd59e15bb60b6b1384982765516a232a0dc4e494d5" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.753565 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-rqd97" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.803839 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c"] Dec 05 10:45:51 crc kubenswrapper[4796]: E1205 10:45:51.804204 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c43b4994-6331-4ed4-9180-1b32253929cf" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.804222 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c43b4994-6331-4ed4-9180-1b32253929cf" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.804368 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c43b4994-6331-4ed4-9180-1b32253929cf" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.804965 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.810889 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c"] Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.811258 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.811370 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.811482 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vmswc" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.811526 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.903120 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btcq4\" (UniqueName: \"kubernetes.io/projected/9ce2ee96-991c-49bc-b64d-1dee82bc425a-kube-api-access-btcq4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c\" (UID: \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.903225 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ce2ee96-991c-49bc-b64d-1dee82bc425a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c\" (UID: \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.903278 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce2ee96-991c-49bc-b64d-1dee82bc425a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c\" (UID: \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" Dec 05 10:45:51 crc kubenswrapper[4796]: I1205 10:45:51.903296 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ce2ee96-991c-49bc-b64d-1dee82bc425a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c\" (UID: \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" Dec 05 10:45:52 crc kubenswrapper[4796]: I1205 10:45:52.005610 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ce2ee96-991c-49bc-b64d-1dee82bc425a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c\" (UID: \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" Dec 05 10:45:52 crc kubenswrapper[4796]: I1205 10:45:52.005700 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce2ee96-991c-49bc-b64d-1dee82bc425a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c\" (UID: \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" Dec 05 10:45:52 crc kubenswrapper[4796]: I1205 10:45:52.005725 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ce2ee96-991c-49bc-b64d-1dee82bc425a-inventory\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c\" (UID: \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" Dec 05 10:45:52 crc kubenswrapper[4796]: I1205 10:45:52.005792 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btcq4\" (UniqueName: \"kubernetes.io/projected/9ce2ee96-991c-49bc-b64d-1dee82bc425a-kube-api-access-btcq4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c\" (UID: \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" Dec 05 10:45:52 crc kubenswrapper[4796]: I1205 10:45:52.010457 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ce2ee96-991c-49bc-b64d-1dee82bc425a-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c\" (UID: \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" Dec 05 10:45:52 crc kubenswrapper[4796]: I1205 10:45:52.010551 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ce2ee96-991c-49bc-b64d-1dee82bc425a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c\" (UID: \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" Dec 05 10:45:52 crc kubenswrapper[4796]: I1205 10:45:52.010613 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce2ee96-991c-49bc-b64d-1dee82bc425a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c\" (UID: \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" Dec 05 10:45:52 crc kubenswrapper[4796]: I1205 10:45:52.018993 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-btcq4\" (UniqueName: \"kubernetes.io/projected/9ce2ee96-991c-49bc-b64d-1dee82bc425a-kube-api-access-btcq4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c\" (UID: \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" Dec 05 10:45:52 crc kubenswrapper[4796]: I1205 10:45:52.119372 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" Dec 05 10:45:52 crc kubenswrapper[4796]: I1205 10:45:52.553899 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c"] Dec 05 10:45:52 crc kubenswrapper[4796]: I1205 10:45:52.761646 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" event={"ID":"9ce2ee96-991c-49bc-b64d-1dee82bc425a","Type":"ContainerStarted","Data":"7ff553dbd8f04603bfebf14e9cfc6d7780d08178ba3deea43dfe07f2b271f568"} Dec 05 10:45:53 crc kubenswrapper[4796]: I1205 10:45:53.782312 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" event={"ID":"9ce2ee96-991c-49bc-b64d-1dee82bc425a","Type":"ContainerStarted","Data":"095c4c99319ce8a100047f9fdb683c3ffb585976342fc6cbd0e1058fdb97996b"} Dec 05 10:45:53 crc kubenswrapper[4796]: I1205 10:45:53.799577 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" podStartSLOduration=2.27140793 podStartE2EDuration="2.799560282s" podCreationTimestamp="2025-12-05 10:45:51 +0000 UTC" firstStartedPulling="2025-12-05 10:45:52.558520078 +0000 UTC m=+1098.846625592" lastFinishedPulling="2025-12-05 10:45:53.086672431 +0000 UTC m=+1099.374777944" observedRunningTime="2025-12-05 10:45:53.795677156 +0000 UTC m=+1100.083782669" 
watchObservedRunningTime="2025-12-05 10:45:53.799560282 +0000 UTC m=+1100.087665794" Dec 05 10:46:05 crc kubenswrapper[4796]: I1205 10:46:05.176877 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:46:05 crc kubenswrapper[4796]: I1205 10:46:05.177339 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:46:35 crc kubenswrapper[4796]: I1205 10:46:35.177400 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:46:35 crc kubenswrapper[4796]: I1205 10:46:35.177884 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:46:35 crc kubenswrapper[4796]: I1205 10:46:35.177929 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:46:35 crc kubenswrapper[4796]: I1205 10:46:35.178767 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"9b9837e0a00511791d7723ec246d4ce28520c7946787df15f8db71f53bf3790d"} pod="openshift-machine-config-operator/machine-config-daemon-9pllw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 10:46:35 crc kubenswrapper[4796]: I1205 10:46:35.178823 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" containerID="cri-o://9b9837e0a00511791d7723ec246d4ce28520c7946787df15f8db71f53bf3790d" gracePeriod=600 Dec 05 10:46:36 crc kubenswrapper[4796]: I1205 10:46:36.073319 4796 generic.go:334] "Generic (PLEG): container finished" podID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerID="9b9837e0a00511791d7723ec246d4ce28520c7946787df15f8db71f53bf3790d" exitCode=0 Dec 05 10:46:36 crc kubenswrapper[4796]: I1205 10:46:36.073437 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerDied","Data":"9b9837e0a00511791d7723ec246d4ce28520c7946787df15f8db71f53bf3790d"} Dec 05 10:46:36 crc kubenswrapper[4796]: I1205 10:46:36.073935 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerStarted","Data":"960ca04fdaec51a11e5104bd66b840084c3f85c581e931a71e4da412d7f92d59"} Dec 05 10:46:36 crc kubenswrapper[4796]: I1205 10:46:36.073958 4796 scope.go:117] "RemoveContainer" containerID="43e07b991ca33a9b26481e28d45698e5b0116b0edd51039d2f5f22853ca65e61" Dec 05 10:47:36 crc kubenswrapper[4796]: I1205 10:47:36.012158 4796 scope.go:117] "RemoveContainer" containerID="e1b8948c49705b6c44a7145ef1a9315332d8d905e50ecd667137804c46528764" Dec 05 10:47:36 crc kubenswrapper[4796]: I1205 
10:47:36.045495 4796 scope.go:117] "RemoveContainer" containerID="4d5cc9de55b28e31527b0fdde95c3aae73d3c3d6071c3c4a663d32d515885afb" Dec 05 10:48:35 crc kubenswrapper[4796]: I1205 10:48:35.177575 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:48:35 crc kubenswrapper[4796]: I1205 10:48:35.178207 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:49:04 crc kubenswrapper[4796]: I1205 10:49:04.274723 4796 generic.go:334] "Generic (PLEG): container finished" podID="9ce2ee96-991c-49bc-b64d-1dee82bc425a" containerID="095c4c99319ce8a100047f9fdb683c3ffb585976342fc6cbd0e1058fdb97996b" exitCode=0 Dec 05 10:49:04 crc kubenswrapper[4796]: I1205 10:49:04.274792 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" event={"ID":"9ce2ee96-991c-49bc-b64d-1dee82bc425a","Type":"ContainerDied","Data":"095c4c99319ce8a100047f9fdb683c3ffb585976342fc6cbd0e1058fdb97996b"} Dec 05 10:49:05 crc kubenswrapper[4796]: I1205 10:49:05.177061 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:49:05 crc kubenswrapper[4796]: I1205 10:49:05.177127 4796 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:49:05 crc kubenswrapper[4796]: I1205 10:49:05.599360 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" Dec 05 10:49:05 crc kubenswrapper[4796]: I1205 10:49:05.723831 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ce2ee96-991c-49bc-b64d-1dee82bc425a-inventory\") pod \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\" (UID: \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\") " Dec 05 10:49:05 crc kubenswrapper[4796]: I1205 10:49:05.723983 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btcq4\" (UniqueName: \"kubernetes.io/projected/9ce2ee96-991c-49bc-b64d-1dee82bc425a-kube-api-access-btcq4\") pod \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\" (UID: \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\") " Dec 05 10:49:05 crc kubenswrapper[4796]: I1205 10:49:05.724156 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ce2ee96-991c-49bc-b64d-1dee82bc425a-ssh-key\") pod \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\" (UID: \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\") " Dec 05 10:49:05 crc kubenswrapper[4796]: I1205 10:49:05.724293 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce2ee96-991c-49bc-b64d-1dee82bc425a-bootstrap-combined-ca-bundle\") pod \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\" (UID: \"9ce2ee96-991c-49bc-b64d-1dee82bc425a\") " Dec 05 10:49:05 crc kubenswrapper[4796]: I1205 10:49:05.730284 4796 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce2ee96-991c-49bc-b64d-1dee82bc425a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "9ce2ee96-991c-49bc-b64d-1dee82bc425a" (UID: "9ce2ee96-991c-49bc-b64d-1dee82bc425a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:49:05 crc kubenswrapper[4796]: I1205 10:49:05.730364 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce2ee96-991c-49bc-b64d-1dee82bc425a-kube-api-access-btcq4" (OuterVolumeSpecName: "kube-api-access-btcq4") pod "9ce2ee96-991c-49bc-b64d-1dee82bc425a" (UID: "9ce2ee96-991c-49bc-b64d-1dee82bc425a"). InnerVolumeSpecName "kube-api-access-btcq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:49:05 crc kubenswrapper[4796]: I1205 10:49:05.748860 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce2ee96-991c-49bc-b64d-1dee82bc425a-inventory" (OuterVolumeSpecName: "inventory") pod "9ce2ee96-991c-49bc-b64d-1dee82bc425a" (UID: "9ce2ee96-991c-49bc-b64d-1dee82bc425a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:49:05 crc kubenswrapper[4796]: I1205 10:49:05.750313 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce2ee96-991c-49bc-b64d-1dee82bc425a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9ce2ee96-991c-49bc-b64d-1dee82bc425a" (UID: "9ce2ee96-991c-49bc-b64d-1dee82bc425a"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:49:05 crc kubenswrapper[4796]: I1205 10:49:05.826202 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btcq4\" (UniqueName: \"kubernetes.io/projected/9ce2ee96-991c-49bc-b64d-1dee82bc425a-kube-api-access-btcq4\") on node \"crc\" DevicePath \"\"" Dec 05 10:49:05 crc kubenswrapper[4796]: I1205 10:49:05.826303 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ce2ee96-991c-49bc-b64d-1dee82bc425a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 10:49:05 crc kubenswrapper[4796]: I1205 10:49:05.826358 4796 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce2ee96-991c-49bc-b64d-1dee82bc425a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:49:05 crc kubenswrapper[4796]: I1205 10:49:05.826408 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ce2ee96-991c-49bc-b64d-1dee82bc425a-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.292709 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" event={"ID":"9ce2ee96-991c-49bc-b64d-1dee82bc425a","Type":"ContainerDied","Data":"7ff553dbd8f04603bfebf14e9cfc6d7780d08178ba3deea43dfe07f2b271f568"} Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.293049 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ff553dbd8f04603bfebf14e9cfc6d7780d08178ba3deea43dfe07f2b271f568" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.292772 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.353029 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz"] Dec 05 10:49:06 crc kubenswrapper[4796]: E1205 10:49:06.353499 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce2ee96-991c-49bc-b64d-1dee82bc425a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.353519 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce2ee96-991c-49bc-b64d-1dee82bc425a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.353714 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce2ee96-991c-49bc-b64d-1dee82bc425a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.354394 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.359061 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.359065 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vmswc" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.359297 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.360195 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.367321 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz"] Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.539200 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1045534-e8dd-4d18-a198-d50d1af5d79b-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz\" (UID: \"a1045534-e8dd-4d18-a198-d50d1af5d79b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.539308 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1045534-e8dd-4d18-a198-d50d1af5d79b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz\" (UID: \"a1045534-e8dd-4d18-a198-d50d1af5d79b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.539342 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psk6j\" (UniqueName: \"kubernetes.io/projected/a1045534-e8dd-4d18-a198-d50d1af5d79b-kube-api-access-psk6j\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz\" (UID: \"a1045534-e8dd-4d18-a198-d50d1af5d79b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.640614 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1045534-e8dd-4d18-a198-d50d1af5d79b-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz\" (UID: \"a1045534-e8dd-4d18-a198-d50d1af5d79b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.640717 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1045534-e8dd-4d18-a198-d50d1af5d79b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz\" (UID: \"a1045534-e8dd-4d18-a198-d50d1af5d79b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.640753 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psk6j\" (UniqueName: \"kubernetes.io/projected/a1045534-e8dd-4d18-a198-d50d1af5d79b-kube-api-access-psk6j\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz\" (UID: \"a1045534-e8dd-4d18-a198-d50d1af5d79b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.645036 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1045534-e8dd-4d18-a198-d50d1af5d79b-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz\" (UID: \"a1045534-e8dd-4d18-a198-d50d1af5d79b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.646266 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1045534-e8dd-4d18-a198-d50d1af5d79b-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz\" (UID: \"a1045534-e8dd-4d18-a198-d50d1af5d79b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.657012 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psk6j\" (UniqueName: \"kubernetes.io/projected/a1045534-e8dd-4d18-a198-d50d1af5d79b-kube-api-access-psk6j\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz\" (UID: \"a1045534-e8dd-4d18-a198-d50d1af5d79b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz" Dec 05 10:49:06 crc kubenswrapper[4796]: I1205 10:49:06.667902 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz" Dec 05 10:49:07 crc kubenswrapper[4796]: I1205 10:49:07.091513 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz"] Dec 05 10:49:07 crc kubenswrapper[4796]: I1205 10:49:07.094663 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 10:49:07 crc kubenswrapper[4796]: I1205 10:49:07.302947 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz" event={"ID":"a1045534-e8dd-4d18-a198-d50d1af5d79b","Type":"ContainerStarted","Data":"f8f647491581117154a710a2ac3900cbc3d50f0260fbfeebc8f79a9919c809a2"} Dec 05 10:49:08 crc kubenswrapper[4796]: I1205 10:49:08.311736 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz" event={"ID":"a1045534-e8dd-4d18-a198-d50d1af5d79b","Type":"ContainerStarted","Data":"d993f3f5b569968fb2547ceb415df5bb333272d12fbbc404b8ee42ffdfaec19d"} Dec 05 10:49:08 crc kubenswrapper[4796]: I1205 10:49:08.333761 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz" podStartSLOduration=1.8060217189999999 podStartE2EDuration="2.33373547s" podCreationTimestamp="2025-12-05 10:49:06 +0000 UTC" firstStartedPulling="2025-12-05 10:49:07.094363748 +0000 UTC m=+1293.382469261" lastFinishedPulling="2025-12-05 10:49:07.622077499 +0000 UTC m=+1293.910183012" observedRunningTime="2025-12-05 10:49:08.323004811 +0000 UTC m=+1294.611110325" watchObservedRunningTime="2025-12-05 10:49:08.33373547 +0000 UTC m=+1294.621840983" Dec 05 10:49:35 crc kubenswrapper[4796]: I1205 10:49:35.177732 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:49:35 crc kubenswrapper[4796]: I1205 10:49:35.178192 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:49:35 crc kubenswrapper[4796]: I1205 10:49:35.178234 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:49:35 crc kubenswrapper[4796]: I1205 10:49:35.178705 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"960ca04fdaec51a11e5104bd66b840084c3f85c581e931a71e4da412d7f92d59"} pod="openshift-machine-config-operator/machine-config-daemon-9pllw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 10:49:35 crc kubenswrapper[4796]: I1205 10:49:35.178753 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" containerID="cri-o://960ca04fdaec51a11e5104bd66b840084c3f85c581e931a71e4da412d7f92d59" gracePeriod=600 Dec 05 10:49:35 crc kubenswrapper[4796]: I1205 10:49:35.502676 4796 generic.go:334] "Generic (PLEG): container finished" podID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerID="960ca04fdaec51a11e5104bd66b840084c3f85c581e931a71e4da412d7f92d59" exitCode=0 Dec 05 10:49:35 crc kubenswrapper[4796]: I1205 10:49:35.502726 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerDied","Data":"960ca04fdaec51a11e5104bd66b840084c3f85c581e931a71e4da412d7f92d59"} Dec 05 10:49:35 crc kubenswrapper[4796]: I1205 10:49:35.502954 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerStarted","Data":"c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03"} Dec 05 10:49:35 crc kubenswrapper[4796]: I1205 10:49:35.502974 4796 scope.go:117] "RemoveContainer" containerID="9b9837e0a00511791d7723ec246d4ce28520c7946787df15f8db71f53bf3790d" Dec 05 10:50:34 crc kubenswrapper[4796]: I1205 10:50:34.909449 4796 generic.go:334] "Generic (PLEG): container finished" podID="a1045534-e8dd-4d18-a198-d50d1af5d79b" containerID="d993f3f5b569968fb2547ceb415df5bb333272d12fbbc404b8ee42ffdfaec19d" exitCode=0 Dec 05 10:50:34 crc kubenswrapper[4796]: I1205 10:50:34.909538 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz" event={"ID":"a1045534-e8dd-4d18-a198-d50d1af5d79b","Type":"ContainerDied","Data":"d993f3f5b569968fb2547ceb415df5bb333272d12fbbc404b8ee42ffdfaec19d"} Dec 05 10:50:36 crc kubenswrapper[4796]: I1205 10:50:36.234563 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz" Dec 05 10:50:36 crc kubenswrapper[4796]: I1205 10:50:36.356548 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1045534-e8dd-4d18-a198-d50d1af5d79b-inventory\") pod \"a1045534-e8dd-4d18-a198-d50d1af5d79b\" (UID: \"a1045534-e8dd-4d18-a198-d50d1af5d79b\") " Dec 05 10:50:36 crc kubenswrapper[4796]: I1205 10:50:36.356710 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1045534-e8dd-4d18-a198-d50d1af5d79b-ssh-key\") pod \"a1045534-e8dd-4d18-a198-d50d1af5d79b\" (UID: \"a1045534-e8dd-4d18-a198-d50d1af5d79b\") " Dec 05 10:50:36 crc kubenswrapper[4796]: I1205 10:50:36.356861 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psk6j\" (UniqueName: \"kubernetes.io/projected/a1045534-e8dd-4d18-a198-d50d1af5d79b-kube-api-access-psk6j\") pod \"a1045534-e8dd-4d18-a198-d50d1af5d79b\" (UID: \"a1045534-e8dd-4d18-a198-d50d1af5d79b\") " Dec 05 10:50:36 crc kubenswrapper[4796]: I1205 10:50:36.362193 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1045534-e8dd-4d18-a198-d50d1af5d79b-kube-api-access-psk6j" (OuterVolumeSpecName: "kube-api-access-psk6j") pod "a1045534-e8dd-4d18-a198-d50d1af5d79b" (UID: "a1045534-e8dd-4d18-a198-d50d1af5d79b"). InnerVolumeSpecName "kube-api-access-psk6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:50:36 crc kubenswrapper[4796]: I1205 10:50:36.380008 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1045534-e8dd-4d18-a198-d50d1af5d79b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a1045534-e8dd-4d18-a198-d50d1af5d79b" (UID: "a1045534-e8dd-4d18-a198-d50d1af5d79b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:50:36 crc kubenswrapper[4796]: I1205 10:50:36.380022 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1045534-e8dd-4d18-a198-d50d1af5d79b-inventory" (OuterVolumeSpecName: "inventory") pod "a1045534-e8dd-4d18-a198-d50d1af5d79b" (UID: "a1045534-e8dd-4d18-a198-d50d1af5d79b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:50:36 crc kubenswrapper[4796]: I1205 10:50:36.460159 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psk6j\" (UniqueName: \"kubernetes.io/projected/a1045534-e8dd-4d18-a198-d50d1af5d79b-kube-api-access-psk6j\") on node \"crc\" DevicePath \"\"" Dec 05 10:50:36 crc kubenswrapper[4796]: I1205 10:50:36.460207 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a1045534-e8dd-4d18-a198-d50d1af5d79b-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 10:50:36 crc kubenswrapper[4796]: I1205 10:50:36.460217 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a1045534-e8dd-4d18-a198-d50d1af5d79b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 10:50:36 crc kubenswrapper[4796]: I1205 10:50:36.930923 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz" event={"ID":"a1045534-e8dd-4d18-a198-d50d1af5d79b","Type":"ContainerDied","Data":"f8f647491581117154a710a2ac3900cbc3d50f0260fbfeebc8f79a9919c809a2"} Dec 05 10:50:36 crc kubenswrapper[4796]: I1205 10:50:36.931353 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8f647491581117154a710a2ac3900cbc3d50f0260fbfeebc8f79a9919c809a2" Dec 05 10:50:36 crc kubenswrapper[4796]: I1205 10:50:36.931046 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz" Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.002779 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r"] Dec 05 10:50:37 crc kubenswrapper[4796]: E1205 10:50:37.003204 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1045534-e8dd-4d18-a198-d50d1af5d79b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.003225 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1045534-e8dd-4d18-a198-d50d1af5d79b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.003460 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1045534-e8dd-4d18-a198-d50d1af5d79b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.004099 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r" Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.007265 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.007455 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vmswc" Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.007600 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.008051 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.012773 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r"] Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.072825 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4856a801-fa7d-4150-b557-1b1a0066ce78-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r\" (UID: \"4856a801-fa7d-4150-b557-1b1a0066ce78\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r" Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.072932 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4856a801-fa7d-4150-b557-1b1a0066ce78-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r\" (UID: \"4856a801-fa7d-4150-b557-1b1a0066ce78\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r" Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.073241 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qghqk\" (UniqueName: \"kubernetes.io/projected/4856a801-fa7d-4150-b557-1b1a0066ce78-kube-api-access-qghqk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r\" (UID: \"4856a801-fa7d-4150-b557-1b1a0066ce78\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r" Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.174247 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4856a801-fa7d-4150-b557-1b1a0066ce78-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r\" (UID: \"4856a801-fa7d-4150-b557-1b1a0066ce78\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r" Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.174345 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qghqk\" (UniqueName: \"kubernetes.io/projected/4856a801-fa7d-4150-b557-1b1a0066ce78-kube-api-access-qghqk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r\" (UID: \"4856a801-fa7d-4150-b557-1b1a0066ce78\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r" Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.174373 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4856a801-fa7d-4150-b557-1b1a0066ce78-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r\" (UID: \"4856a801-fa7d-4150-b557-1b1a0066ce78\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r" Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.178494 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4856a801-fa7d-4150-b557-1b1a0066ce78-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r\" (UID: \"4856a801-fa7d-4150-b557-1b1a0066ce78\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r" Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.179341 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4856a801-fa7d-4150-b557-1b1a0066ce78-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r\" (UID: \"4856a801-fa7d-4150-b557-1b1a0066ce78\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r" Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.191252 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qghqk\" (UniqueName: \"kubernetes.io/projected/4856a801-fa7d-4150-b557-1b1a0066ce78-kube-api-access-qghqk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r\" (UID: \"4856a801-fa7d-4150-b557-1b1a0066ce78\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r" Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.326439 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r" Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.803434 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r"] Dec 05 10:50:37 crc kubenswrapper[4796]: I1205 10:50:37.938522 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r" event={"ID":"4856a801-fa7d-4150-b557-1b1a0066ce78","Type":"ContainerStarted","Data":"2fea2ada1a025dfd2f1c367574e592d72e0bc69c4b285c9ac3b74a963f23b38d"} Dec 05 10:50:38 crc kubenswrapper[4796]: I1205 10:50:38.947505 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r" event={"ID":"4856a801-fa7d-4150-b557-1b1a0066ce78","Type":"ContainerStarted","Data":"43cad2d2131079d489bb6f730239a7554c0116d2326c44acdcfbdf61c742a01f"} Dec 05 10:50:38 crc kubenswrapper[4796]: I1205 10:50:38.967236 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r" podStartSLOduration=2.352417338 podStartE2EDuration="2.967217922s" podCreationTimestamp="2025-12-05 10:50:36 +0000 UTC" firstStartedPulling="2025-12-05 10:50:37.803495394 +0000 UTC m=+1384.091600907" lastFinishedPulling="2025-12-05 10:50:38.418295978 +0000 UTC m=+1384.706401491" observedRunningTime="2025-12-05 10:50:38.960821239 +0000 UTC m=+1385.248926752" watchObservedRunningTime="2025-12-05 10:50:38.967217922 +0000 UTC m=+1385.255323435" Dec 05 10:50:48 crc kubenswrapper[4796]: I1205 10:50:48.039736 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6dhkt"] Dec 05 10:50:48 crc kubenswrapper[4796]: I1205 10:50:48.040346 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-hjp4p"] Dec 05 10:50:48 crc 
kubenswrapper[4796]: I1205 10:50:48.049645 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-22f2t"] Dec 05 10:50:48 crc kubenswrapper[4796]: I1205 10:50:48.055227 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-22f2t"] Dec 05 10:50:48 crc kubenswrapper[4796]: I1205 10:50:48.060520 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6dhkt"] Dec 05 10:50:48 crc kubenswrapper[4796]: I1205 10:50:48.065517 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-hjp4p"] Dec 05 10:50:50 crc kubenswrapper[4796]: I1205 10:50:50.039754 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e2da584-c796-4527-b9db-0b455d037fec" path="/var/lib/kubelet/pods/1e2da584-c796-4527-b9db-0b455d037fec/volumes" Dec 05 10:50:50 crc kubenswrapper[4796]: I1205 10:50:50.040546 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80" path="/var/lib/kubelet/pods/5a2c6f26-7d14-4cda-a3d0-901c0a9fcc80/volumes" Dec 05 10:50:50 crc kubenswrapper[4796]: I1205 10:50:50.041043 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1794457-a1b4-4c7b-bc21-ba7acb558b2e" path="/var/lib/kubelet/pods/a1794457-a1b4-4c7b-bc21-ba7acb558b2e/volumes" Dec 05 10:50:58 crc kubenswrapper[4796]: I1205 10:50:58.027993 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-8296-account-create-vsqtn"] Dec 05 10:50:58 crc kubenswrapper[4796]: I1205 10:50:58.040952 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ebc7-account-create-t4zfm"] Dec 05 10:50:58 crc kubenswrapper[4796]: I1205 10:50:58.045829 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-8296-account-create-vsqtn"] Dec 05 10:50:58 crc kubenswrapper[4796]: I1205 10:50:58.052853 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-6546-account-create-rs4m9"] Dec 05 10:50:58 crc kubenswrapper[4796]: I1205 10:50:58.059560 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ebc7-account-create-t4zfm"] Dec 05 10:50:58 crc kubenswrapper[4796]: I1205 10:50:58.065322 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6546-account-create-rs4m9"] Dec 05 10:51:00 crc kubenswrapper[4796]: I1205 10:51:00.041124 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19c553eb-9b61-4fd2-8584-a8f9d862f59d" path="/var/lib/kubelet/pods/19c553eb-9b61-4fd2-8584-a8f9d862f59d/volumes" Dec 05 10:51:00 crc kubenswrapper[4796]: I1205 10:51:00.041637 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d6a8d0e-22fa-489d-8776-0bd33787e161" path="/var/lib/kubelet/pods/7d6a8d0e-22fa-489d-8776-0bd33787e161/volumes" Dec 05 10:51:00 crc kubenswrapper[4796]: I1205 10:51:00.042396 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="855342fc-53c0-408c-88e0-bfcf5f5c181c" path="/var/lib/kubelet/pods/855342fc-53c0-408c-88e0-bfcf5f5c181c/volumes" Dec 05 10:51:15 crc kubenswrapper[4796]: I1205 10:51:15.029972 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-kcxnj"] Dec 05 10:51:15 crc kubenswrapper[4796]: I1205 10:51:15.037307 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-6dghb"] Dec 05 10:51:15 crc kubenswrapper[4796]: I1205 10:51:15.042770 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-kcxnj"] Dec 05 10:51:15 crc kubenswrapper[4796]: I1205 10:51:15.047781 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-6dghb"] Dec 05 10:51:16 crc kubenswrapper[4796]: I1205 10:51:16.020329 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-8n89d"] Dec 05 10:51:16 crc kubenswrapper[4796]: I1205 10:51:16.026013 
4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-8n89d"] Dec 05 10:51:16 crc kubenswrapper[4796]: I1205 10:51:16.039901 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59da3269-88ea-4095-b841-cf1b27cb4274" path="/var/lib/kubelet/pods/59da3269-88ea-4095-b841-cf1b27cb4274/volumes" Dec 05 10:51:16 crc kubenswrapper[4796]: I1205 10:51:16.040503 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70929a4c-02e5-48e5-acf6-390bba65c808" path="/var/lib/kubelet/pods/70929a4c-02e5-48e5-acf6-390bba65c808/volumes" Dec 05 10:51:16 crc kubenswrapper[4796]: I1205 10:51:16.041003 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a535246b-8d73-4814-9eb0-9bc7e04e3414" path="/var/lib/kubelet/pods/a535246b-8d73-4814-9eb0-9bc7e04e3414/volumes" Dec 05 10:51:16 crc kubenswrapper[4796]: I1205 10:51:16.879817 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d6hg6"] Dec 05 10:51:16 crc kubenswrapper[4796]: I1205 10:51:16.881786 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6hg6" Dec 05 10:51:16 crc kubenswrapper[4796]: I1205 10:51:16.889897 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6hg6"] Dec 05 10:51:16 crc kubenswrapper[4796]: I1205 10:51:16.991204 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44w9g\" (UniqueName: \"kubernetes.io/projected/40fa0e71-61d5-4ebb-8a4f-d1c640b56694-kube-api-access-44w9g\") pod \"redhat-marketplace-d6hg6\" (UID: \"40fa0e71-61d5-4ebb-8a4f-d1c640b56694\") " pod="openshift-marketplace/redhat-marketplace-d6hg6" Dec 05 10:51:16 crc kubenswrapper[4796]: I1205 10:51:16.991315 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40fa0e71-61d5-4ebb-8a4f-d1c640b56694-catalog-content\") pod \"redhat-marketplace-d6hg6\" (UID: \"40fa0e71-61d5-4ebb-8a4f-d1c640b56694\") " pod="openshift-marketplace/redhat-marketplace-d6hg6" Dec 05 10:51:16 crc kubenswrapper[4796]: I1205 10:51:16.991414 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40fa0e71-61d5-4ebb-8a4f-d1c640b56694-utilities\") pod \"redhat-marketplace-d6hg6\" (UID: \"40fa0e71-61d5-4ebb-8a4f-d1c640b56694\") " pod="openshift-marketplace/redhat-marketplace-d6hg6" Dec 05 10:51:17 crc kubenswrapper[4796]: I1205 10:51:17.019122 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-fvtdm"] Dec 05 10:51:17 crc kubenswrapper[4796]: I1205 10:51:17.024710 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-fvtdm"] Dec 05 10:51:17 crc kubenswrapper[4796]: I1205 10:51:17.094392 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/40fa0e71-61d5-4ebb-8a4f-d1c640b56694-utilities\") pod \"redhat-marketplace-d6hg6\" (UID: \"40fa0e71-61d5-4ebb-8a4f-d1c640b56694\") " pod="openshift-marketplace/redhat-marketplace-d6hg6" Dec 05 10:51:17 crc kubenswrapper[4796]: I1205 10:51:17.094500 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44w9g\" (UniqueName: \"kubernetes.io/projected/40fa0e71-61d5-4ebb-8a4f-d1c640b56694-kube-api-access-44w9g\") pod \"redhat-marketplace-d6hg6\" (UID: \"40fa0e71-61d5-4ebb-8a4f-d1c640b56694\") " pod="openshift-marketplace/redhat-marketplace-d6hg6" Dec 05 10:51:17 crc kubenswrapper[4796]: I1205 10:51:17.094572 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40fa0e71-61d5-4ebb-8a4f-d1c640b56694-catalog-content\") pod \"redhat-marketplace-d6hg6\" (UID: \"40fa0e71-61d5-4ebb-8a4f-d1c640b56694\") " pod="openshift-marketplace/redhat-marketplace-d6hg6" Dec 05 10:51:17 crc kubenswrapper[4796]: I1205 10:51:17.094862 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40fa0e71-61d5-4ebb-8a4f-d1c640b56694-utilities\") pod \"redhat-marketplace-d6hg6\" (UID: \"40fa0e71-61d5-4ebb-8a4f-d1c640b56694\") " pod="openshift-marketplace/redhat-marketplace-d6hg6" Dec 05 10:51:17 crc kubenswrapper[4796]: I1205 10:51:17.094982 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40fa0e71-61d5-4ebb-8a4f-d1c640b56694-catalog-content\") pod \"redhat-marketplace-d6hg6\" (UID: \"40fa0e71-61d5-4ebb-8a4f-d1c640b56694\") " pod="openshift-marketplace/redhat-marketplace-d6hg6" Dec 05 10:51:17 crc kubenswrapper[4796]: I1205 10:51:17.114396 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44w9g\" (UniqueName: 
\"kubernetes.io/projected/40fa0e71-61d5-4ebb-8a4f-d1c640b56694-kube-api-access-44w9g\") pod \"redhat-marketplace-d6hg6\" (UID: \"40fa0e71-61d5-4ebb-8a4f-d1c640b56694\") " pod="openshift-marketplace/redhat-marketplace-d6hg6" Dec 05 10:51:17 crc kubenswrapper[4796]: I1205 10:51:17.200826 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6hg6" Dec 05 10:51:17 crc kubenswrapper[4796]: I1205 10:51:17.615779 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6hg6"] Dec 05 10:51:18 crc kubenswrapper[4796]: I1205 10:51:18.038229 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1a87b8d-81a5-468f-9264-a5896daa5960" path="/var/lib/kubelet/pods/e1a87b8d-81a5-468f-9264-a5896daa5960/volumes" Dec 05 10:51:18 crc kubenswrapper[4796]: I1205 10:51:18.247795 4796 generic.go:334] "Generic (PLEG): container finished" podID="40fa0e71-61d5-4ebb-8a4f-d1c640b56694" containerID="079d421ce4319b4b39285c55d55bc83a12409445f8cc674ef8215a52d4aac5b6" exitCode=0 Dec 05 10:51:18 crc kubenswrapper[4796]: I1205 10:51:18.248395 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6hg6" event={"ID":"40fa0e71-61d5-4ebb-8a4f-d1c640b56694","Type":"ContainerDied","Data":"079d421ce4319b4b39285c55d55bc83a12409445f8cc674ef8215a52d4aac5b6"} Dec 05 10:51:18 crc kubenswrapper[4796]: I1205 10:51:18.248442 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6hg6" event={"ID":"40fa0e71-61d5-4ebb-8a4f-d1c640b56694","Type":"ContainerStarted","Data":"ad4e1f317542ab39ea31acb1b983eae772ceb845053c68267f432730249e856c"} Dec 05 10:51:19 crc kubenswrapper[4796]: I1205 10:51:19.019009 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-hqvft"] Dec 05 10:51:19 crc kubenswrapper[4796]: I1205 10:51:19.024440 4796 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/keystone-db-sync-hqvft"] Dec 05 10:51:19 crc kubenswrapper[4796]: I1205 10:51:19.256382 4796 generic.go:334] "Generic (PLEG): container finished" podID="40fa0e71-61d5-4ebb-8a4f-d1c640b56694" containerID="04b185ae682de0ad9468aae05bc35e87a3bd444745a14cc1d9a828cc58836b92" exitCode=0 Dec 05 10:51:19 crc kubenswrapper[4796]: I1205 10:51:19.256422 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6hg6" event={"ID":"40fa0e71-61d5-4ebb-8a4f-d1c640b56694","Type":"ContainerDied","Data":"04b185ae682de0ad9468aae05bc35e87a3bd444745a14cc1d9a828cc58836b92"} Dec 05 10:51:20 crc kubenswrapper[4796]: I1205 10:51:20.038500 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdcb30d0-0871-4171-8406-92f4864feac1" path="/var/lib/kubelet/pods/bdcb30d0-0871-4171-8406-92f4864feac1/volumes" Dec 05 10:51:20 crc kubenswrapper[4796]: I1205 10:51:20.264451 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6hg6" event={"ID":"40fa0e71-61d5-4ebb-8a4f-d1c640b56694","Type":"ContainerStarted","Data":"8477162085a85fe7be0aa2d03c6091ba45fb567b4c2a9a70e0d1121357779863"} Dec 05 10:51:20 crc kubenswrapper[4796]: I1205 10:51:20.280211 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d6hg6" podStartSLOduration=2.736647218 podStartE2EDuration="4.280197202s" podCreationTimestamp="2025-12-05 10:51:16 +0000 UTC" firstStartedPulling="2025-12-05 10:51:18.249311209 +0000 UTC m=+1424.537416721" lastFinishedPulling="2025-12-05 10:51:19.792861193 +0000 UTC m=+1426.080966705" observedRunningTime="2025-12-05 10:51:20.276329176 +0000 UTC m=+1426.564434689" watchObservedRunningTime="2025-12-05 10:51:20.280197202 +0000 UTC m=+1426.568302715" Dec 05 10:51:23 crc kubenswrapper[4796]: I1205 10:51:23.267541 4796 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-jgqq2"] Dec 05 10:51:23 crc kubenswrapper[4796]: I1205 10:51:23.269574 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgqq2" Dec 05 10:51:23 crc kubenswrapper[4796]: I1205 10:51:23.277802 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jgqq2"] Dec 05 10:51:23 crc kubenswrapper[4796]: I1205 10:51:23.391154 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec00d68e-dea1-4cf0-b162-033f72036514-catalog-content\") pod \"community-operators-jgqq2\" (UID: \"ec00d68e-dea1-4cf0-b162-033f72036514\") " pod="openshift-marketplace/community-operators-jgqq2" Dec 05 10:51:23 crc kubenswrapper[4796]: I1205 10:51:23.391289 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec00d68e-dea1-4cf0-b162-033f72036514-utilities\") pod \"community-operators-jgqq2\" (UID: \"ec00d68e-dea1-4cf0-b162-033f72036514\") " pod="openshift-marketplace/community-operators-jgqq2" Dec 05 10:51:23 crc kubenswrapper[4796]: I1205 10:51:23.391312 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6bkt\" (UniqueName: \"kubernetes.io/projected/ec00d68e-dea1-4cf0-b162-033f72036514-kube-api-access-w6bkt\") pod \"community-operators-jgqq2\" (UID: \"ec00d68e-dea1-4cf0-b162-033f72036514\") " pod="openshift-marketplace/community-operators-jgqq2" Dec 05 10:51:23 crc kubenswrapper[4796]: I1205 10:51:23.492988 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec00d68e-dea1-4cf0-b162-033f72036514-catalog-content\") pod \"community-operators-jgqq2\" (UID: 
\"ec00d68e-dea1-4cf0-b162-033f72036514\") " pod="openshift-marketplace/community-operators-jgqq2" Dec 05 10:51:23 crc kubenswrapper[4796]: I1205 10:51:23.493127 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec00d68e-dea1-4cf0-b162-033f72036514-utilities\") pod \"community-operators-jgqq2\" (UID: \"ec00d68e-dea1-4cf0-b162-033f72036514\") " pod="openshift-marketplace/community-operators-jgqq2" Dec 05 10:51:23 crc kubenswrapper[4796]: I1205 10:51:23.493145 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6bkt\" (UniqueName: \"kubernetes.io/projected/ec00d68e-dea1-4cf0-b162-033f72036514-kube-api-access-w6bkt\") pod \"community-operators-jgqq2\" (UID: \"ec00d68e-dea1-4cf0-b162-033f72036514\") " pod="openshift-marketplace/community-operators-jgqq2" Dec 05 10:51:23 crc kubenswrapper[4796]: I1205 10:51:23.493740 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec00d68e-dea1-4cf0-b162-033f72036514-catalog-content\") pod \"community-operators-jgqq2\" (UID: \"ec00d68e-dea1-4cf0-b162-033f72036514\") " pod="openshift-marketplace/community-operators-jgqq2" Dec 05 10:51:23 crc kubenswrapper[4796]: I1205 10:51:23.493803 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec00d68e-dea1-4cf0-b162-033f72036514-utilities\") pod \"community-operators-jgqq2\" (UID: \"ec00d68e-dea1-4cf0-b162-033f72036514\") " pod="openshift-marketplace/community-operators-jgqq2" Dec 05 10:51:23 crc kubenswrapper[4796]: I1205 10:51:23.509241 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6bkt\" (UniqueName: \"kubernetes.io/projected/ec00d68e-dea1-4cf0-b162-033f72036514-kube-api-access-w6bkt\") pod \"community-operators-jgqq2\" (UID: 
\"ec00d68e-dea1-4cf0-b162-033f72036514\") " pod="openshift-marketplace/community-operators-jgqq2" Dec 05 10:51:23 crc kubenswrapper[4796]: I1205 10:51:23.591527 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jgqq2" Dec 05 10:51:23 crc kubenswrapper[4796]: I1205 10:51:23.964653 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jgqq2"] Dec 05 10:51:23 crc kubenswrapper[4796]: W1205 10:51:23.968791 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec00d68e_dea1_4cf0_b162_033f72036514.slice/crio-691cb7b2d1bdd0b0f86cd9f5a319b4cb7024170299bb0413b07f3526c6725dbc WatchSource:0}: Error finding container 691cb7b2d1bdd0b0f86cd9f5a319b4cb7024170299bb0413b07f3526c6725dbc: Status 404 returned error can't find the container with id 691cb7b2d1bdd0b0f86cd9f5a319b4cb7024170299bb0413b07f3526c6725dbc Dec 05 10:51:24 crc kubenswrapper[4796]: I1205 10:51:24.297674 4796 generic.go:334] "Generic (PLEG): container finished" podID="ec00d68e-dea1-4cf0-b162-033f72036514" containerID="4a87e88be608cc37455b5ccff4a5d5b2f06c6297112225b110a6a43e6fd0aee2" exitCode=0 Dec 05 10:51:24 crc kubenswrapper[4796]: I1205 10:51:24.297716 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgqq2" event={"ID":"ec00d68e-dea1-4cf0-b162-033f72036514","Type":"ContainerDied","Data":"4a87e88be608cc37455b5ccff4a5d5b2f06c6297112225b110a6a43e6fd0aee2"} Dec 05 10:51:24 crc kubenswrapper[4796]: I1205 10:51:24.297950 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgqq2" event={"ID":"ec00d68e-dea1-4cf0-b162-033f72036514","Type":"ContainerStarted","Data":"691cb7b2d1bdd0b0f86cd9f5a319b4cb7024170299bb0413b07f3526c6725dbc"} Dec 05 10:51:25 crc kubenswrapper[4796]: I1205 10:51:25.306320 4796 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-jgqq2" event={"ID":"ec00d68e-dea1-4cf0-b162-033f72036514","Type":"ContainerStarted","Data":"34a0c6837ef4a320e55211eb22b67955ffe8416b0da2a06d083ed30609763df7"} Dec 05 10:51:26 crc kubenswrapper[4796]: I1205 10:51:26.313856 4796 generic.go:334] "Generic (PLEG): container finished" podID="ec00d68e-dea1-4cf0-b162-033f72036514" containerID="34a0c6837ef4a320e55211eb22b67955ffe8416b0da2a06d083ed30609763df7" exitCode=0 Dec 05 10:51:26 crc kubenswrapper[4796]: I1205 10:51:26.313932 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgqq2" event={"ID":"ec00d68e-dea1-4cf0-b162-033f72036514","Type":"ContainerDied","Data":"34a0c6837ef4a320e55211eb22b67955ffe8416b0da2a06d083ed30609763df7"} Dec 05 10:51:27 crc kubenswrapper[4796]: I1205 10:51:27.020427 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-e210-account-create-qkfmm"] Dec 05 10:51:27 crc kubenswrapper[4796]: I1205 10:51:27.026743 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6b1d-account-create-h2cct"] Dec 05 10:51:27 crc kubenswrapper[4796]: I1205 10:51:27.033020 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-e210-account-create-qkfmm"] Dec 05 10:51:27 crc kubenswrapper[4796]: I1205 10:51:27.038584 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6b1d-account-create-h2cct"] Dec 05 10:51:27 crc kubenswrapper[4796]: I1205 10:51:27.201049 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d6hg6" Dec 05 10:51:27 crc kubenswrapper[4796]: I1205 10:51:27.201311 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d6hg6" Dec 05 10:51:27 crc kubenswrapper[4796]: I1205 10:51:27.233097 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-d6hg6" Dec 05 10:51:27 crc kubenswrapper[4796]: I1205 10:51:27.322542 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgqq2" event={"ID":"ec00d68e-dea1-4cf0-b162-033f72036514","Type":"ContainerStarted","Data":"62244e8150e89399d4dfbff2cee993cde5488aa90ea8ce9fdee868732171dbdd"} Dec 05 10:51:27 crc kubenswrapper[4796]: I1205 10:51:27.335673 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jgqq2" podStartSLOduration=1.808730162 podStartE2EDuration="4.335661047s" podCreationTimestamp="2025-12-05 10:51:23 +0000 UTC" firstStartedPulling="2025-12-05 10:51:24.29889665 +0000 UTC m=+1430.587002163" lastFinishedPulling="2025-12-05 10:51:26.825827535 +0000 UTC m=+1433.113933048" observedRunningTime="2025-12-05 10:51:27.333484212 +0000 UTC m=+1433.621589725" watchObservedRunningTime="2025-12-05 10:51:27.335661047 +0000 UTC m=+1433.623766560" Dec 05 10:51:27 crc kubenswrapper[4796]: I1205 10:51:27.357474 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d6hg6" Dec 05 10:51:28 crc kubenswrapper[4796]: I1205 10:51:28.039671 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad5ee03d-3cd9-4633-bdf2-00942ae22258" path="/var/lib/kubelet/pods/ad5ee03d-3cd9-4633-bdf2-00942ae22258/volumes" Dec 05 10:51:28 crc kubenswrapper[4796]: I1205 10:51:28.040191 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddc03506-8f42-469e-8315-b0bfe3b4c2be" path="/var/lib/kubelet/pods/ddc03506-8f42-469e-8315-b0bfe3b4c2be/volumes" Dec 05 10:51:29 crc kubenswrapper[4796]: I1205 10:51:29.462270 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6hg6"] Dec 05 10:51:29 crc kubenswrapper[4796]: I1205 10:51:29.462675 4796 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/redhat-marketplace-d6hg6" podUID="40fa0e71-61d5-4ebb-8a4f-d1c640b56694" containerName="registry-server" containerID="cri-o://8477162085a85fe7be0aa2d03c6091ba45fb567b4c2a9a70e0d1121357779863" gracePeriod=2 Dec 05 10:51:29 crc kubenswrapper[4796]: I1205 10:51:29.807893 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6hg6" Dec 05 10:51:29 crc kubenswrapper[4796]: I1205 10:51:29.991407 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40fa0e71-61d5-4ebb-8a4f-d1c640b56694-utilities\") pod \"40fa0e71-61d5-4ebb-8a4f-d1c640b56694\" (UID: \"40fa0e71-61d5-4ebb-8a4f-d1c640b56694\") " Dec 05 10:51:29 crc kubenswrapper[4796]: I1205 10:51:29.991524 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40fa0e71-61d5-4ebb-8a4f-d1c640b56694-catalog-content\") pod \"40fa0e71-61d5-4ebb-8a4f-d1c640b56694\" (UID: \"40fa0e71-61d5-4ebb-8a4f-d1c640b56694\") " Dec 05 10:51:29 crc kubenswrapper[4796]: I1205 10:51:29.991743 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44w9g\" (UniqueName: \"kubernetes.io/projected/40fa0e71-61d5-4ebb-8a4f-d1c640b56694-kube-api-access-44w9g\") pod \"40fa0e71-61d5-4ebb-8a4f-d1c640b56694\" (UID: \"40fa0e71-61d5-4ebb-8a4f-d1c640b56694\") " Dec 05 10:51:29 crc kubenswrapper[4796]: I1205 10:51:29.991992 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40fa0e71-61d5-4ebb-8a4f-d1c640b56694-utilities" (OuterVolumeSpecName: "utilities") pod "40fa0e71-61d5-4ebb-8a4f-d1c640b56694" (UID: "40fa0e71-61d5-4ebb-8a4f-d1c640b56694"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:51:29 crc kubenswrapper[4796]: I1205 10:51:29.992187 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40fa0e71-61d5-4ebb-8a4f-d1c640b56694-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 10:51:29 crc kubenswrapper[4796]: I1205 10:51:29.996574 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40fa0e71-61d5-4ebb-8a4f-d1c640b56694-kube-api-access-44w9g" (OuterVolumeSpecName: "kube-api-access-44w9g") pod "40fa0e71-61d5-4ebb-8a4f-d1c640b56694" (UID: "40fa0e71-61d5-4ebb-8a4f-d1c640b56694"). InnerVolumeSpecName "kube-api-access-44w9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:51:30 crc kubenswrapper[4796]: I1205 10:51:30.005101 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40fa0e71-61d5-4ebb-8a4f-d1c640b56694-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40fa0e71-61d5-4ebb-8a4f-d1c640b56694" (UID: "40fa0e71-61d5-4ebb-8a4f-d1c640b56694"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:51:30 crc kubenswrapper[4796]: I1205 10:51:30.093795 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44w9g\" (UniqueName: \"kubernetes.io/projected/40fa0e71-61d5-4ebb-8a4f-d1c640b56694-kube-api-access-44w9g\") on node \"crc\" DevicePath \"\"" Dec 05 10:51:30 crc kubenswrapper[4796]: I1205 10:51:30.093820 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40fa0e71-61d5-4ebb-8a4f-d1c640b56694-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 10:51:30 crc kubenswrapper[4796]: I1205 10:51:30.352766 4796 generic.go:334] "Generic (PLEG): container finished" podID="40fa0e71-61d5-4ebb-8a4f-d1c640b56694" containerID="8477162085a85fe7be0aa2d03c6091ba45fb567b4c2a9a70e0d1121357779863" exitCode=0 Dec 05 10:51:30 crc kubenswrapper[4796]: I1205 10:51:30.352814 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6hg6" event={"ID":"40fa0e71-61d5-4ebb-8a4f-d1c640b56694","Type":"ContainerDied","Data":"8477162085a85fe7be0aa2d03c6091ba45fb567b4c2a9a70e0d1121357779863"} Dec 05 10:51:30 crc kubenswrapper[4796]: I1205 10:51:30.353052 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d6hg6" event={"ID":"40fa0e71-61d5-4ebb-8a4f-d1c640b56694","Type":"ContainerDied","Data":"ad4e1f317542ab39ea31acb1b983eae772ceb845053c68267f432730249e856c"} Dec 05 10:51:30 crc kubenswrapper[4796]: I1205 10:51:30.353073 4796 scope.go:117] "RemoveContainer" containerID="8477162085a85fe7be0aa2d03c6091ba45fb567b4c2a9a70e0d1121357779863" Dec 05 10:51:30 crc kubenswrapper[4796]: I1205 10:51:30.352856 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d6hg6" Dec 05 10:51:30 crc kubenswrapper[4796]: I1205 10:51:30.371307 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6hg6"] Dec 05 10:51:30 crc kubenswrapper[4796]: I1205 10:51:30.371960 4796 scope.go:117] "RemoveContainer" containerID="04b185ae682de0ad9468aae05bc35e87a3bd444745a14cc1d9a828cc58836b92" Dec 05 10:51:30 crc kubenswrapper[4796]: I1205 10:51:30.377425 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-d6hg6"] Dec 05 10:51:30 crc kubenswrapper[4796]: I1205 10:51:30.387403 4796 scope.go:117] "RemoveContainer" containerID="079d421ce4319b4b39285c55d55bc83a12409445f8cc674ef8215a52d4aac5b6" Dec 05 10:51:30 crc kubenswrapper[4796]: I1205 10:51:30.418744 4796 scope.go:117] "RemoveContainer" containerID="8477162085a85fe7be0aa2d03c6091ba45fb567b4c2a9a70e0d1121357779863" Dec 05 10:51:30 crc kubenswrapper[4796]: E1205 10:51:30.419063 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8477162085a85fe7be0aa2d03c6091ba45fb567b4c2a9a70e0d1121357779863\": container with ID starting with 8477162085a85fe7be0aa2d03c6091ba45fb567b4c2a9a70e0d1121357779863 not found: ID does not exist" containerID="8477162085a85fe7be0aa2d03c6091ba45fb567b4c2a9a70e0d1121357779863" Dec 05 10:51:30 crc kubenswrapper[4796]: I1205 10:51:30.419092 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8477162085a85fe7be0aa2d03c6091ba45fb567b4c2a9a70e0d1121357779863"} err="failed to get container status \"8477162085a85fe7be0aa2d03c6091ba45fb567b4c2a9a70e0d1121357779863\": rpc error: code = NotFound desc = could not find container \"8477162085a85fe7be0aa2d03c6091ba45fb567b4c2a9a70e0d1121357779863\": container with ID starting with 8477162085a85fe7be0aa2d03c6091ba45fb567b4c2a9a70e0d1121357779863 not found: 
ID does not exist" Dec 05 10:51:30 crc kubenswrapper[4796]: I1205 10:51:30.419115 4796 scope.go:117] "RemoveContainer" containerID="04b185ae682de0ad9468aae05bc35e87a3bd444745a14cc1d9a828cc58836b92" Dec 05 10:51:30 crc kubenswrapper[4796]: E1205 10:51:30.419446 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04b185ae682de0ad9468aae05bc35e87a3bd444745a14cc1d9a828cc58836b92\": container with ID starting with 04b185ae682de0ad9468aae05bc35e87a3bd444745a14cc1d9a828cc58836b92 not found: ID does not exist" containerID="04b185ae682de0ad9468aae05bc35e87a3bd444745a14cc1d9a828cc58836b92" Dec 05 10:51:30 crc kubenswrapper[4796]: I1205 10:51:30.419464 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04b185ae682de0ad9468aae05bc35e87a3bd444745a14cc1d9a828cc58836b92"} err="failed to get container status \"04b185ae682de0ad9468aae05bc35e87a3bd444745a14cc1d9a828cc58836b92\": rpc error: code = NotFound desc = could not find container \"04b185ae682de0ad9468aae05bc35e87a3bd444745a14cc1d9a828cc58836b92\": container with ID starting with 04b185ae682de0ad9468aae05bc35e87a3bd444745a14cc1d9a828cc58836b92 not found: ID does not exist" Dec 05 10:51:30 crc kubenswrapper[4796]: I1205 10:51:30.419477 4796 scope.go:117] "RemoveContainer" containerID="079d421ce4319b4b39285c55d55bc83a12409445f8cc674ef8215a52d4aac5b6" Dec 05 10:51:30 crc kubenswrapper[4796]: E1205 10:51:30.419709 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"079d421ce4319b4b39285c55d55bc83a12409445f8cc674ef8215a52d4aac5b6\": container with ID starting with 079d421ce4319b4b39285c55d55bc83a12409445f8cc674ef8215a52d4aac5b6 not found: ID does not exist" containerID="079d421ce4319b4b39285c55d55bc83a12409445f8cc674ef8215a52d4aac5b6" Dec 05 10:51:30 crc kubenswrapper[4796]: I1205 10:51:30.419729 4796 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079d421ce4319b4b39285c55d55bc83a12409445f8cc674ef8215a52d4aac5b6"} err="failed to get container status \"079d421ce4319b4b39285c55d55bc83a12409445f8cc674ef8215a52d4aac5b6\": rpc error: code = NotFound desc = could not find container \"079d421ce4319b4b39285c55d55bc83a12409445f8cc674ef8215a52d4aac5b6\": container with ID starting with 079d421ce4319b4b39285c55d55bc83a12409445f8cc674ef8215a52d4aac5b6 not found: ID does not exist" Dec 05 10:51:32 crc kubenswrapper[4796]: I1205 10:51:32.038835 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40fa0e71-61d5-4ebb-8a4f-d1c640b56694" path="/var/lib/kubelet/pods/40fa0e71-61d5-4ebb-8a4f-d1c640b56694/volumes" Dec 05 10:51:33 crc kubenswrapper[4796]: I1205 10:51:33.591757 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jgqq2" Dec 05 10:51:33 crc kubenswrapper[4796]: I1205 10:51:33.591800 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jgqq2" Dec 05 10:51:33 crc kubenswrapper[4796]: I1205 10:51:33.624517 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jgqq2" Dec 05 10:51:34 crc kubenswrapper[4796]: I1205 10:51:34.421363 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jgqq2" Dec 05 10:51:34 crc kubenswrapper[4796]: I1205 10:51:34.667225 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jgqq2"] Dec 05 10:51:35 crc kubenswrapper[4796]: I1205 10:51:35.177969 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Dec 05 10:51:35 crc kubenswrapper[4796]: I1205 10:51:35.178488 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:51:35 crc kubenswrapper[4796]: I1205 10:51:35.391540 4796 generic.go:334] "Generic (PLEG): container finished" podID="4856a801-fa7d-4150-b557-1b1a0066ce78" containerID="43cad2d2131079d489bb6f730239a7554c0116d2326c44acdcfbdf61c742a01f" exitCode=0 Dec 05 10:51:35 crc kubenswrapper[4796]: I1205 10:51:35.391620 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r" event={"ID":"4856a801-fa7d-4150-b557-1b1a0066ce78","Type":"ContainerDied","Data":"43cad2d2131079d489bb6f730239a7554c0116d2326c44acdcfbdf61c742a01f"} Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.174526 4796 scope.go:117] "RemoveContainer" containerID="62dc96a793449894118ca39b5b880f3b8596f9fded97738e8edf130d36a2e390" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.192099 4796 scope.go:117] "RemoveContainer" containerID="27e25534cf6a261756afc8038a472398d7e1b4d32c56d4f45a91ec63e62e1d55" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.228736 4796 scope.go:117] "RemoveContainer" containerID="602ed37b7872fb46a6996860cd73bab4216149f10cbfc00441fb8a9f34fc1bf8" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.255971 4796 scope.go:117] "RemoveContainer" containerID="5a7496a5607e1453f37a1884bab360790721361124a9ae8930fa77a0d6fa6e2f" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.286427 4796 scope.go:117] "RemoveContainer" containerID="13874658bc680a6f0ba387c012debd3208f9c94e630ac766695beeb4de48117e" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.316300 4796 
scope.go:117] "RemoveContainer" containerID="4817e7c6935750d26073e0450d91fdceea5fcda49400bec5017a029b2b047ae9" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.356436 4796 scope.go:117] "RemoveContainer" containerID="adf52932c809ee3e9a63bb672c5648e8a16e65bea39a62441c2a0762117988d6" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.372214 4796 scope.go:117] "RemoveContainer" containerID="d7bc89b859b2c18228ac1400eb6f5024d0eea0390bb2b102bfbef3bdd2418683" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.386738 4796 scope.go:117] "RemoveContainer" containerID="007acfe157f50bf1f983628a254e4792c914b5d9977f8e5a1583309e7ef53f05" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.403045 4796 scope.go:117] "RemoveContainer" containerID="7843a26578240072886edee3fcdf9b6fcf4e996e70b550db3cd4dc1f18eb12d7" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.406122 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jgqq2" podUID="ec00d68e-dea1-4cf0-b162-033f72036514" containerName="registry-server" containerID="cri-o://62244e8150e89399d4dfbff2cee993cde5488aa90ea8ce9fdee868732171dbdd" gracePeriod=2 Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.423446 4796 scope.go:117] "RemoveContainer" containerID="b98a37052e555bf078066c2b1848cd39f0f9b716503f10f7a9948aceaf3f4acd" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.496565 4796 scope.go:117] "RemoveContainer" containerID="6713b35bf82ae898e3121e1871ea97afbef3b1ec6032f0dafda895af54e87dee" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.514860 4796 scope.go:117] "RemoveContainer" containerID="2b4644ac70c8362629a40b98893046e4e8ec6d28e082f9d683a251cc594a5213" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.640045 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.709041 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4856a801-fa7d-4150-b557-1b1a0066ce78-ssh-key\") pod \"4856a801-fa7d-4150-b557-1b1a0066ce78\" (UID: \"4856a801-fa7d-4150-b557-1b1a0066ce78\") " Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.709475 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qghqk\" (UniqueName: \"kubernetes.io/projected/4856a801-fa7d-4150-b557-1b1a0066ce78-kube-api-access-qghqk\") pod \"4856a801-fa7d-4150-b557-1b1a0066ce78\" (UID: \"4856a801-fa7d-4150-b557-1b1a0066ce78\") " Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.709520 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4856a801-fa7d-4150-b557-1b1a0066ce78-inventory\") pod \"4856a801-fa7d-4150-b557-1b1a0066ce78\" (UID: \"4856a801-fa7d-4150-b557-1b1a0066ce78\") " Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.714466 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4856a801-fa7d-4150-b557-1b1a0066ce78-kube-api-access-qghqk" (OuterVolumeSpecName: "kube-api-access-qghqk") pod "4856a801-fa7d-4150-b557-1b1a0066ce78" (UID: "4856a801-fa7d-4150-b557-1b1a0066ce78"). InnerVolumeSpecName "kube-api-access-qghqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.722427 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jgqq2" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.731414 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4856a801-fa7d-4150-b557-1b1a0066ce78-inventory" (OuterVolumeSpecName: "inventory") pod "4856a801-fa7d-4150-b557-1b1a0066ce78" (UID: "4856a801-fa7d-4150-b557-1b1a0066ce78"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.740212 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4856a801-fa7d-4150-b557-1b1a0066ce78-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4856a801-fa7d-4150-b557-1b1a0066ce78" (UID: "4856a801-fa7d-4150-b557-1b1a0066ce78"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.810933 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec00d68e-dea1-4cf0-b162-033f72036514-utilities\") pod \"ec00d68e-dea1-4cf0-b162-033f72036514\" (UID: \"ec00d68e-dea1-4cf0-b162-033f72036514\") " Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.811034 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec00d68e-dea1-4cf0-b162-033f72036514-catalog-content\") pod \"ec00d68e-dea1-4cf0-b162-033f72036514\" (UID: \"ec00d68e-dea1-4cf0-b162-033f72036514\") " Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.811185 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6bkt\" (UniqueName: \"kubernetes.io/projected/ec00d68e-dea1-4cf0-b162-033f72036514-kube-api-access-w6bkt\") pod \"ec00d68e-dea1-4cf0-b162-033f72036514\" (UID: \"ec00d68e-dea1-4cf0-b162-033f72036514\") " Dec 05 10:51:36 
crc kubenswrapper[4796]: I1205 10:51:36.811531 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qghqk\" (UniqueName: \"kubernetes.io/projected/4856a801-fa7d-4150-b557-1b1a0066ce78-kube-api-access-qghqk\") on node \"crc\" DevicePath \"\"" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.811549 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4856a801-fa7d-4150-b557-1b1a0066ce78-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.811558 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4856a801-fa7d-4150-b557-1b1a0066ce78-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.811592 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec00d68e-dea1-4cf0-b162-033f72036514-utilities" (OuterVolumeSpecName: "utilities") pod "ec00d68e-dea1-4cf0-b162-033f72036514" (UID: "ec00d68e-dea1-4cf0-b162-033f72036514"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.813750 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec00d68e-dea1-4cf0-b162-033f72036514-kube-api-access-w6bkt" (OuterVolumeSpecName: "kube-api-access-w6bkt") pod "ec00d68e-dea1-4cf0-b162-033f72036514" (UID: "ec00d68e-dea1-4cf0-b162-033f72036514"). InnerVolumeSpecName "kube-api-access-w6bkt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.850186 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec00d68e-dea1-4cf0-b162-033f72036514-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec00d68e-dea1-4cf0-b162-033f72036514" (UID: "ec00d68e-dea1-4cf0-b162-033f72036514"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.913049 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec00d68e-dea1-4cf0-b162-033f72036514-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.913083 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6bkt\" (UniqueName: \"kubernetes.io/projected/ec00d68e-dea1-4cf0-b162-033f72036514-kube-api-access-w6bkt\") on node \"crc\" DevicePath \"\"" Dec 05 10:51:36 crc kubenswrapper[4796]: I1205 10:51:36.913097 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec00d68e-dea1-4cf0-b162-033f72036514-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.413800 4796 generic.go:334] "Generic (PLEG): container finished" podID="ec00d68e-dea1-4cf0-b162-033f72036514" containerID="62244e8150e89399d4dfbff2cee993cde5488aa90ea8ce9fdee868732171dbdd" exitCode=0 Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.413856 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jgqq2" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.413874 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgqq2" event={"ID":"ec00d68e-dea1-4cf0-b162-033f72036514","Type":"ContainerDied","Data":"62244e8150e89399d4dfbff2cee993cde5488aa90ea8ce9fdee868732171dbdd"} Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.413902 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jgqq2" event={"ID":"ec00d68e-dea1-4cf0-b162-033f72036514","Type":"ContainerDied","Data":"691cb7b2d1bdd0b0f86cd9f5a319b4cb7024170299bb0413b07f3526c6725dbc"} Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.413954 4796 scope.go:117] "RemoveContainer" containerID="62244e8150e89399d4dfbff2cee993cde5488aa90ea8ce9fdee868732171dbdd" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.415905 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r" event={"ID":"4856a801-fa7d-4150-b557-1b1a0066ce78","Type":"ContainerDied","Data":"2fea2ada1a025dfd2f1c367574e592d72e0bc69c4b285c9ac3b74a963f23b38d"} Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.415947 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fea2ada1a025dfd2f1c367574e592d72e0bc69c4b285c9ac3b74a963f23b38d" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.416014 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.443842 4796 scope.go:117] "RemoveContainer" containerID="34a0c6837ef4a320e55211eb22b67955ffe8416b0da2a06d083ed30609763df7" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.449607 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jgqq2"] Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.456807 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jgqq2"] Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.472666 4796 scope.go:117] "RemoveContainer" containerID="4a87e88be608cc37455b5ccff4a5d5b2f06c6297112225b110a6a43e6fd0aee2" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.486250 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6"] Dec 05 10:51:37 crc kubenswrapper[4796]: E1205 10:51:37.486638 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4856a801-fa7d-4150-b557-1b1a0066ce78" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.486659 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="4856a801-fa7d-4150-b557-1b1a0066ce78" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 05 10:51:37 crc kubenswrapper[4796]: E1205 10:51:37.486693 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40fa0e71-61d5-4ebb-8a4f-d1c640b56694" containerName="extract-content" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.486702 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="40fa0e71-61d5-4ebb-8a4f-d1c640b56694" containerName="extract-content" Dec 05 10:51:37 crc kubenswrapper[4796]: E1205 10:51:37.486713 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="40fa0e71-61d5-4ebb-8a4f-d1c640b56694" containerName="extract-utilities" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.486721 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="40fa0e71-61d5-4ebb-8a4f-d1c640b56694" containerName="extract-utilities" Dec 05 10:51:37 crc kubenswrapper[4796]: E1205 10:51:37.486734 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec00d68e-dea1-4cf0-b162-033f72036514" containerName="extract-content" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.486740 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec00d68e-dea1-4cf0-b162-033f72036514" containerName="extract-content" Dec 05 10:51:37 crc kubenswrapper[4796]: E1205 10:51:37.486757 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec00d68e-dea1-4cf0-b162-033f72036514" containerName="registry-server" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.486763 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec00d68e-dea1-4cf0-b162-033f72036514" containerName="registry-server" Dec 05 10:51:37 crc kubenswrapper[4796]: E1205 10:51:37.486779 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec00d68e-dea1-4cf0-b162-033f72036514" containerName="extract-utilities" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.486785 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec00d68e-dea1-4cf0-b162-033f72036514" containerName="extract-utilities" Dec 05 10:51:37 crc kubenswrapper[4796]: E1205 10:51:37.486797 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40fa0e71-61d5-4ebb-8a4f-d1c640b56694" containerName="registry-server" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.486802 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="40fa0e71-61d5-4ebb-8a4f-d1c640b56694" containerName="registry-server" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.486965 4796 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4856a801-fa7d-4150-b557-1b1a0066ce78" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.486975 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="40fa0e71-61d5-4ebb-8a4f-d1c640b56694" containerName="registry-server" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.486998 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec00d68e-dea1-4cf0-b162-033f72036514" containerName="registry-server" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.496252 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6"] Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.496615 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.499281 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.500382 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.500605 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vmswc" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.501115 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.506777 4796 scope.go:117] "RemoveContainer" containerID="62244e8150e89399d4dfbff2cee993cde5488aa90ea8ce9fdee868732171dbdd" Dec 05 10:51:37 crc kubenswrapper[4796]: E1205 10:51:37.507386 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"62244e8150e89399d4dfbff2cee993cde5488aa90ea8ce9fdee868732171dbdd\": container with ID starting with 62244e8150e89399d4dfbff2cee993cde5488aa90ea8ce9fdee868732171dbdd not found: ID does not exist" containerID="62244e8150e89399d4dfbff2cee993cde5488aa90ea8ce9fdee868732171dbdd" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.507429 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62244e8150e89399d4dfbff2cee993cde5488aa90ea8ce9fdee868732171dbdd"} err="failed to get container status \"62244e8150e89399d4dfbff2cee993cde5488aa90ea8ce9fdee868732171dbdd\": rpc error: code = NotFound desc = could not find container \"62244e8150e89399d4dfbff2cee993cde5488aa90ea8ce9fdee868732171dbdd\": container with ID starting with 62244e8150e89399d4dfbff2cee993cde5488aa90ea8ce9fdee868732171dbdd not found: ID does not exist" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.507461 4796 scope.go:117] "RemoveContainer" containerID="34a0c6837ef4a320e55211eb22b67955ffe8416b0da2a06d083ed30609763df7" Dec 05 10:51:37 crc kubenswrapper[4796]: E1205 10:51:37.509241 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34a0c6837ef4a320e55211eb22b67955ffe8416b0da2a06d083ed30609763df7\": container with ID starting with 34a0c6837ef4a320e55211eb22b67955ffe8416b0da2a06d083ed30609763df7 not found: ID does not exist" containerID="34a0c6837ef4a320e55211eb22b67955ffe8416b0da2a06d083ed30609763df7" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.509266 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34a0c6837ef4a320e55211eb22b67955ffe8416b0da2a06d083ed30609763df7"} err="failed to get container status \"34a0c6837ef4a320e55211eb22b67955ffe8416b0da2a06d083ed30609763df7\": rpc error: code = NotFound desc = could not find container \"34a0c6837ef4a320e55211eb22b67955ffe8416b0da2a06d083ed30609763df7\": 
container with ID starting with 34a0c6837ef4a320e55211eb22b67955ffe8416b0da2a06d083ed30609763df7 not found: ID does not exist" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.509282 4796 scope.go:117] "RemoveContainer" containerID="4a87e88be608cc37455b5ccff4a5d5b2f06c6297112225b110a6a43e6fd0aee2" Dec 05 10:51:37 crc kubenswrapper[4796]: E1205 10:51:37.510016 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a87e88be608cc37455b5ccff4a5d5b2f06c6297112225b110a6a43e6fd0aee2\": container with ID starting with 4a87e88be608cc37455b5ccff4a5d5b2f06c6297112225b110a6a43e6fd0aee2 not found: ID does not exist" containerID="4a87e88be608cc37455b5ccff4a5d5b2f06c6297112225b110a6a43e6fd0aee2" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.510040 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a87e88be608cc37455b5ccff4a5d5b2f06c6297112225b110a6a43e6fd0aee2"} err="failed to get container status \"4a87e88be608cc37455b5ccff4a5d5b2f06c6297112225b110a6a43e6fd0aee2\": rpc error: code = NotFound desc = could not find container \"4a87e88be608cc37455b5ccff4a5d5b2f06c6297112225b110a6a43e6fd0aee2\": container with ID starting with 4a87e88be608cc37455b5ccff4a5d5b2f06c6297112225b110a6a43e6fd0aee2 not found: ID does not exist" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.518930 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/589c89c5-f3fd-44c9-ab63-8b1e4774d28f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6\" (UID: \"589c89c5-f3fd-44c9-ab63-8b1e4774d28f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.519008 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/589c89c5-f3fd-44c9-ab63-8b1e4774d28f-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6\" (UID: \"589c89c5-f3fd-44c9-ab63-8b1e4774d28f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.519249 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkxgw\" (UniqueName: \"kubernetes.io/projected/589c89c5-f3fd-44c9-ab63-8b1e4774d28f-kube-api-access-mkxgw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6\" (UID: \"589c89c5-f3fd-44c9-ab63-8b1e4774d28f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.620168 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkxgw\" (UniqueName: \"kubernetes.io/projected/589c89c5-f3fd-44c9-ab63-8b1e4774d28f-kube-api-access-mkxgw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6\" (UID: \"589c89c5-f3fd-44c9-ab63-8b1e4774d28f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.620237 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/589c89c5-f3fd-44c9-ab63-8b1e4774d28f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6\" (UID: \"589c89c5-f3fd-44c9-ab63-8b1e4774d28f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.620298 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/589c89c5-f3fd-44c9-ab63-8b1e4774d28f-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6\" (UID: 
\"589c89c5-f3fd-44c9-ab63-8b1e4774d28f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.624645 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/589c89c5-f3fd-44c9-ab63-8b1e4774d28f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6\" (UID: \"589c89c5-f3fd-44c9-ab63-8b1e4774d28f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.624861 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/589c89c5-f3fd-44c9-ab63-8b1e4774d28f-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6\" (UID: \"589c89c5-f3fd-44c9-ab63-8b1e4774d28f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.634033 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkxgw\" (UniqueName: \"kubernetes.io/projected/589c89c5-f3fd-44c9-ab63-8b1e4774d28f-kube-api-access-mkxgw\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6\" (UID: \"589c89c5-f3fd-44c9-ab63-8b1e4774d28f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6" Dec 05 10:51:37 crc kubenswrapper[4796]: I1205 10:51:37.835099 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6" Dec 05 10:51:38 crc kubenswrapper[4796]: I1205 10:51:38.039444 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec00d68e-dea1-4cf0-b162-033f72036514" path="/var/lib/kubelet/pods/ec00d68e-dea1-4cf0-b162-033f72036514/volumes" Dec 05 10:51:38 crc kubenswrapper[4796]: I1205 10:51:38.248129 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6"] Dec 05 10:51:38 crc kubenswrapper[4796]: W1205 10:51:38.249430 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod589c89c5_f3fd_44c9_ab63_8b1e4774d28f.slice/crio-d5b9c214b94397122d21ab134acba8316f9cf25d0631b540159a08ef8039e457 WatchSource:0}: Error finding container d5b9c214b94397122d21ab134acba8316f9cf25d0631b540159a08ef8039e457: Status 404 returned error can't find the container with id d5b9c214b94397122d21ab134acba8316f9cf25d0631b540159a08ef8039e457 Dec 05 10:51:38 crc kubenswrapper[4796]: I1205 10:51:38.422548 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6" event={"ID":"589c89c5-f3fd-44c9-ab63-8b1e4774d28f","Type":"ContainerStarted","Data":"d5b9c214b94397122d21ab134acba8316f9cf25d0631b540159a08ef8039e457"} Dec 05 10:51:39 crc kubenswrapper[4796]: I1205 10:51:39.059378 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-46e3-account-create-4b4vf"] Dec 05 10:51:39 crc kubenswrapper[4796]: I1205 10:51:39.065476 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-46e3-account-create-4b4vf"] Dec 05 10:51:39 crc kubenswrapper[4796]: I1205 10:51:39.431020 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6" 
event={"ID":"589c89c5-f3fd-44c9-ab63-8b1e4774d28f","Type":"ContainerStarted","Data":"4cde517b3798bfcb15686e066978590673e4a95548af8ac9ad9833936374459a"} Dec 05 10:51:39 crc kubenswrapper[4796]: I1205 10:51:39.444088 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6" podStartSLOduration=1.903536566 podStartE2EDuration="2.444073656s" podCreationTimestamp="2025-12-05 10:51:37 +0000 UTC" firstStartedPulling="2025-12-05 10:51:38.252067611 +0000 UTC m=+1444.540173125" lastFinishedPulling="2025-12-05 10:51:38.792604702 +0000 UTC m=+1445.080710215" observedRunningTime="2025-12-05 10:51:39.442000997 +0000 UTC m=+1445.730106511" watchObservedRunningTime="2025-12-05 10:51:39.444073656 +0000 UTC m=+1445.732179169" Dec 05 10:51:40 crc kubenswrapper[4796]: I1205 10:51:40.039082 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156" path="/var/lib/kubelet/pods/2c3e7b5f-0297-41f7-b4c6-0ac3ddb91156/volumes" Dec 05 10:51:42 crc kubenswrapper[4796]: I1205 10:51:42.019540 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-qx4tb"] Dec 05 10:51:42 crc kubenswrapper[4796]: I1205 10:51:42.025603 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-qx4tb"] Dec 05 10:51:42 crc kubenswrapper[4796]: I1205 10:51:42.039383 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af696b32-cadf-4959-9c28-21804524ce8b" path="/var/lib/kubelet/pods/af696b32-cadf-4959-9c28-21804524ce8b/volumes" Dec 05 10:51:42 crc kubenswrapper[4796]: I1205 10:51:42.451292 4796 generic.go:334] "Generic (PLEG): container finished" podID="589c89c5-f3fd-44c9-ab63-8b1e4774d28f" containerID="4cde517b3798bfcb15686e066978590673e4a95548af8ac9ad9833936374459a" exitCode=0 Dec 05 10:51:42 crc kubenswrapper[4796]: I1205 10:51:42.451331 4796 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6" event={"ID":"589c89c5-f3fd-44c9-ab63-8b1e4774d28f","Type":"ContainerDied","Data":"4cde517b3798bfcb15686e066978590673e4a95548af8ac9ad9833936374459a"} Dec 05 10:51:43 crc kubenswrapper[4796]: I1205 10:51:43.018402 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-2mbjb"] Dec 05 10:51:43 crc kubenswrapper[4796]: I1205 10:51:43.024134 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-2mbjb"] Dec 05 10:51:43 crc kubenswrapper[4796]: I1205 10:51:43.746359 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6" Dec 05 10:51:43 crc kubenswrapper[4796]: I1205 10:51:43.907215 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkxgw\" (UniqueName: \"kubernetes.io/projected/589c89c5-f3fd-44c9-ab63-8b1e4774d28f-kube-api-access-mkxgw\") pod \"589c89c5-f3fd-44c9-ab63-8b1e4774d28f\" (UID: \"589c89c5-f3fd-44c9-ab63-8b1e4774d28f\") " Dec 05 10:51:43 crc kubenswrapper[4796]: I1205 10:51:43.907507 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/589c89c5-f3fd-44c9-ab63-8b1e4774d28f-ssh-key\") pod \"589c89c5-f3fd-44c9-ab63-8b1e4774d28f\" (UID: \"589c89c5-f3fd-44c9-ab63-8b1e4774d28f\") " Dec 05 10:51:43 crc kubenswrapper[4796]: I1205 10:51:43.907533 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/589c89c5-f3fd-44c9-ab63-8b1e4774d28f-inventory\") pod \"589c89c5-f3fd-44c9-ab63-8b1e4774d28f\" (UID: \"589c89c5-f3fd-44c9-ab63-8b1e4774d28f\") " Dec 05 10:51:43 crc kubenswrapper[4796]: I1205 10:51:43.911578 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/589c89c5-f3fd-44c9-ab63-8b1e4774d28f-kube-api-access-mkxgw" (OuterVolumeSpecName: "kube-api-access-mkxgw") pod "589c89c5-f3fd-44c9-ab63-8b1e4774d28f" (UID: "589c89c5-f3fd-44c9-ab63-8b1e4774d28f"). InnerVolumeSpecName "kube-api-access-mkxgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:51:43 crc kubenswrapper[4796]: I1205 10:51:43.928511 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/589c89c5-f3fd-44c9-ab63-8b1e4774d28f-inventory" (OuterVolumeSpecName: "inventory") pod "589c89c5-f3fd-44c9-ab63-8b1e4774d28f" (UID: "589c89c5-f3fd-44c9-ab63-8b1e4774d28f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:51:43 crc kubenswrapper[4796]: I1205 10:51:43.928627 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/589c89c5-f3fd-44c9-ab63-8b1e4774d28f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "589c89c5-f3fd-44c9-ab63-8b1e4774d28f" (UID: "589c89c5-f3fd-44c9-ab63-8b1e4774d28f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.009058 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkxgw\" (UniqueName: \"kubernetes.io/projected/589c89c5-f3fd-44c9-ab63-8b1e4774d28f-kube-api-access-mkxgw\") on node \"crc\" DevicePath \"\"" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.009083 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/589c89c5-f3fd-44c9-ab63-8b1e4774d28f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.009091 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/589c89c5-f3fd-44c9-ab63-8b1e4774d28f-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.041824 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6" path="/var/lib/kubelet/pods/8b1c27bb-58c0-4631-ba14-f1cddf9ecdf6/volumes" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.463598 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6" event={"ID":"589c89c5-f3fd-44c9-ab63-8b1e4774d28f","Type":"ContainerDied","Data":"d5b9c214b94397122d21ab134acba8316f9cf25d0631b540159a08ef8039e457"} Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.463635 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5b9c214b94397122d21ab134acba8316f9cf25d0631b540159a08ef8039e457" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.463667 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.512807 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx"] Dec 05 10:51:44 crc kubenswrapper[4796]: E1205 10:51:44.513254 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="589c89c5-f3fd-44c9-ab63-8b1e4774d28f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.513268 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="589c89c5-f3fd-44c9-ab63-8b1e4774d28f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.513466 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="589c89c5-f3fd-44c9-ab63-8b1e4774d28f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.514030 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.515119 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xznq\" (UniqueName: \"kubernetes.io/projected/61349c4c-5e04-4781-bbd0-1e6930083dd1-kube-api-access-6xznq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ktktx\" (UID: \"61349c4c-5e04-4781-bbd0-1e6930083dd1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.515161 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61349c4c-5e04-4781-bbd0-1e6930083dd1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ktktx\" (UID: \"61349c4c-5e04-4781-bbd0-1e6930083dd1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.515305 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61349c4c-5e04-4781-bbd0-1e6930083dd1-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ktktx\" (UID: \"61349c4c-5e04-4781-bbd0-1e6930083dd1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.515419 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.517950 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx"] Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.517986 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vmswc" 
Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.519160 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.519504 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.616272 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61349c4c-5e04-4781-bbd0-1e6930083dd1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ktktx\" (UID: \"61349c4c-5e04-4781-bbd0-1e6930083dd1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.616572 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61349c4c-5e04-4781-bbd0-1e6930083dd1-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ktktx\" (UID: \"61349c4c-5e04-4781-bbd0-1e6930083dd1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.616716 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xznq\" (UniqueName: \"kubernetes.io/projected/61349c4c-5e04-4781-bbd0-1e6930083dd1-kube-api-access-6xznq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ktktx\" (UID: \"61349c4c-5e04-4781-bbd0-1e6930083dd1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.619640 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61349c4c-5e04-4781-bbd0-1e6930083dd1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ktktx\" (UID: \"61349c4c-5e04-4781-bbd0-1e6930083dd1\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.620424 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61349c4c-5e04-4781-bbd0-1e6930083dd1-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ktktx\" (UID: \"61349c4c-5e04-4781-bbd0-1e6930083dd1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.630176 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xznq\" (UniqueName: \"kubernetes.io/projected/61349c4c-5e04-4781-bbd0-1e6930083dd1-kube-api-access-6xznq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ktktx\" (UID: \"61349c4c-5e04-4781-bbd0-1e6930083dd1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx" Dec 05 10:51:44 crc kubenswrapper[4796]: I1205 10:51:44.826597 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx" Dec 05 10:51:45 crc kubenswrapper[4796]: I1205 10:51:45.226259 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx"] Dec 05 10:51:45 crc kubenswrapper[4796]: I1205 10:51:45.469985 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx" event={"ID":"61349c4c-5e04-4781-bbd0-1e6930083dd1","Type":"ContainerStarted","Data":"50b2052d2181dac7e8c337924a8d16b8c7a11d9573d0195578184e652da75da5"} Dec 05 10:51:46 crc kubenswrapper[4796]: I1205 10:51:46.479049 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx" event={"ID":"61349c4c-5e04-4781-bbd0-1e6930083dd1","Type":"ContainerStarted","Data":"2ecdab104de070ddc61cf00fed49b24ca4840cfc530407495d2e42ded30c48be"} Dec 05 10:51:46 crc kubenswrapper[4796]: I1205 10:51:46.495247 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx" podStartSLOduration=2.003948857 podStartE2EDuration="2.495234317s" podCreationTimestamp="2025-12-05 10:51:44 +0000 UTC" firstStartedPulling="2025-12-05 10:51:45.232364564 +0000 UTC m=+1451.520470077" lastFinishedPulling="2025-12-05 10:51:45.723650024 +0000 UTC m=+1452.011755537" observedRunningTime="2025-12-05 10:51:46.490134273 +0000 UTC m=+1452.778239785" watchObservedRunningTime="2025-12-05 10:51:46.495234317 +0000 UTC m=+1452.783339830" Dec 05 10:51:57 crc kubenswrapper[4796]: I1205 10:51:57.022260 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-2fftm"] Dec 05 10:51:57 crc kubenswrapper[4796]: I1205 10:51:57.029676 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-gqtd9"] Dec 05 10:51:57 crc kubenswrapper[4796]: I1205 10:51:57.035213 4796 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-2fftm"] Dec 05 10:51:57 crc kubenswrapper[4796]: I1205 10:51:57.040021 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-gqtd9"] Dec 05 10:51:58 crc kubenswrapper[4796]: I1205 10:51:58.049087 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b644d420-f868-46a7-9e03-1d6fc4f78894" path="/var/lib/kubelet/pods/b644d420-f868-46a7-9e03-1d6fc4f78894/volumes" Dec 05 10:51:58 crc kubenswrapper[4796]: I1205 10:51:58.049614 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2395df9-6a76-405e-b346-ee634afb272c" path="/var/lib/kubelet/pods/d2395df9-6a76-405e-b346-ee634afb272c/volumes" Dec 05 10:52:05 crc kubenswrapper[4796]: I1205 10:52:05.177444 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:52:05 crc kubenswrapper[4796]: I1205 10:52:05.177919 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:52:06 crc kubenswrapper[4796]: I1205 10:52:06.282789 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nhtsw"] Dec 05 10:52:06 crc kubenswrapper[4796]: I1205 10:52:06.284797 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nhtsw" Dec 05 10:52:06 crc kubenswrapper[4796]: I1205 10:52:06.291500 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nhtsw"] Dec 05 10:52:06 crc kubenswrapper[4796]: I1205 10:52:06.326266 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/280015ca-0c72-44fc-9644-96fba920999e-catalog-content\") pod \"redhat-operators-nhtsw\" (UID: \"280015ca-0c72-44fc-9644-96fba920999e\") " pod="openshift-marketplace/redhat-operators-nhtsw" Dec 05 10:52:06 crc kubenswrapper[4796]: I1205 10:52:06.326355 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/280015ca-0c72-44fc-9644-96fba920999e-utilities\") pod \"redhat-operators-nhtsw\" (UID: \"280015ca-0c72-44fc-9644-96fba920999e\") " pod="openshift-marketplace/redhat-operators-nhtsw" Dec 05 10:52:06 crc kubenswrapper[4796]: I1205 10:52:06.326417 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwsfk\" (UniqueName: \"kubernetes.io/projected/280015ca-0c72-44fc-9644-96fba920999e-kube-api-access-kwsfk\") pod \"redhat-operators-nhtsw\" (UID: \"280015ca-0c72-44fc-9644-96fba920999e\") " pod="openshift-marketplace/redhat-operators-nhtsw" Dec 05 10:52:06 crc kubenswrapper[4796]: I1205 10:52:06.428051 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/280015ca-0c72-44fc-9644-96fba920999e-catalog-content\") pod \"redhat-operators-nhtsw\" (UID: \"280015ca-0c72-44fc-9644-96fba920999e\") " pod="openshift-marketplace/redhat-operators-nhtsw" Dec 05 10:52:06 crc kubenswrapper[4796]: I1205 10:52:06.428152 4796 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/280015ca-0c72-44fc-9644-96fba920999e-utilities\") pod \"redhat-operators-nhtsw\" (UID: \"280015ca-0c72-44fc-9644-96fba920999e\") " pod="openshift-marketplace/redhat-operators-nhtsw" Dec 05 10:52:06 crc kubenswrapper[4796]: I1205 10:52:06.428227 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwsfk\" (UniqueName: \"kubernetes.io/projected/280015ca-0c72-44fc-9644-96fba920999e-kube-api-access-kwsfk\") pod \"redhat-operators-nhtsw\" (UID: \"280015ca-0c72-44fc-9644-96fba920999e\") " pod="openshift-marketplace/redhat-operators-nhtsw" Dec 05 10:52:06 crc kubenswrapper[4796]: I1205 10:52:06.428541 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/280015ca-0c72-44fc-9644-96fba920999e-catalog-content\") pod \"redhat-operators-nhtsw\" (UID: \"280015ca-0c72-44fc-9644-96fba920999e\") " pod="openshift-marketplace/redhat-operators-nhtsw" Dec 05 10:52:06 crc kubenswrapper[4796]: I1205 10:52:06.428728 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/280015ca-0c72-44fc-9644-96fba920999e-utilities\") pod \"redhat-operators-nhtsw\" (UID: \"280015ca-0c72-44fc-9644-96fba920999e\") " pod="openshift-marketplace/redhat-operators-nhtsw" Dec 05 10:52:06 crc kubenswrapper[4796]: I1205 10:52:06.443671 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwsfk\" (UniqueName: \"kubernetes.io/projected/280015ca-0c72-44fc-9644-96fba920999e-kube-api-access-kwsfk\") pod \"redhat-operators-nhtsw\" (UID: \"280015ca-0c72-44fc-9644-96fba920999e\") " pod="openshift-marketplace/redhat-operators-nhtsw" Dec 05 10:52:06 crc kubenswrapper[4796]: I1205 10:52:06.608624 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nhtsw" Dec 05 10:52:06 crc kubenswrapper[4796]: I1205 10:52:06.994367 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nhtsw"] Dec 05 10:52:07 crc kubenswrapper[4796]: I1205 10:52:07.605186 4796 generic.go:334] "Generic (PLEG): container finished" podID="280015ca-0c72-44fc-9644-96fba920999e" containerID="f1bfdcd88df64f6badc0d70c6380786c640d83c51680436236334f2f507f0e79" exitCode=0 Dec 05 10:52:07 crc kubenswrapper[4796]: I1205 10:52:07.605241 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhtsw" event={"ID":"280015ca-0c72-44fc-9644-96fba920999e","Type":"ContainerDied","Data":"f1bfdcd88df64f6badc0d70c6380786c640d83c51680436236334f2f507f0e79"} Dec 05 10:52:07 crc kubenswrapper[4796]: I1205 10:52:07.605471 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhtsw" event={"ID":"280015ca-0c72-44fc-9644-96fba920999e","Type":"ContainerStarted","Data":"4855c39146f36bdc1a687614a58f514305147be7324a68b1eda6b5ea5200c6fd"} Dec 05 10:52:08 crc kubenswrapper[4796]: I1205 10:52:08.613773 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhtsw" event={"ID":"280015ca-0c72-44fc-9644-96fba920999e","Type":"ContainerStarted","Data":"b3e0f1ff47be4d123b66bac2dd01fb08a5771d0995691656bf892c7354a85104"} Dec 05 10:52:09 crc kubenswrapper[4796]: I1205 10:52:09.621056 4796 generic.go:334] "Generic (PLEG): container finished" podID="280015ca-0c72-44fc-9644-96fba920999e" containerID="b3e0f1ff47be4d123b66bac2dd01fb08a5771d0995691656bf892c7354a85104" exitCode=0 Dec 05 10:52:09 crc kubenswrapper[4796]: I1205 10:52:09.621096 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhtsw" 
event={"ID":"280015ca-0c72-44fc-9644-96fba920999e","Type":"ContainerDied","Data":"b3e0f1ff47be4d123b66bac2dd01fb08a5771d0995691656bf892c7354a85104"} Dec 05 10:52:10 crc kubenswrapper[4796]: I1205 10:52:10.629696 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhtsw" event={"ID":"280015ca-0c72-44fc-9644-96fba920999e","Type":"ContainerStarted","Data":"c34d3c1342820f749e3a0b84dfd1f99bbaf8fb5d07c93ad273402c1b80c76f49"} Dec 05 10:52:10 crc kubenswrapper[4796]: I1205 10:52:10.646854 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nhtsw" podStartSLOduration=2.146112313 podStartE2EDuration="4.646843893s" podCreationTimestamp="2025-12-05 10:52:06 +0000 UTC" firstStartedPulling="2025-12-05 10:52:07.606408661 +0000 UTC m=+1473.894514173" lastFinishedPulling="2025-12-05 10:52:10.10714024 +0000 UTC m=+1476.395245753" observedRunningTime="2025-12-05 10:52:10.643380908 +0000 UTC m=+1476.931486421" watchObservedRunningTime="2025-12-05 10:52:10.646843893 +0000 UTC m=+1476.934949406" Dec 05 10:52:11 crc kubenswrapper[4796]: I1205 10:52:11.636747 4796 generic.go:334] "Generic (PLEG): container finished" podID="61349c4c-5e04-4781-bbd0-1e6930083dd1" containerID="2ecdab104de070ddc61cf00fed49b24ca4840cfc530407495d2e42ded30c48be" exitCode=0 Dec 05 10:52:11 crc kubenswrapper[4796]: I1205 10:52:11.636827 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx" event={"ID":"61349c4c-5e04-4781-bbd0-1e6930083dd1","Type":"ContainerDied","Data":"2ecdab104de070ddc61cf00fed49b24ca4840cfc530407495d2e42ded30c48be"} Dec 05 10:52:12 crc kubenswrapper[4796]: I1205 10:52:12.933727 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.127629 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xznq\" (UniqueName: \"kubernetes.io/projected/61349c4c-5e04-4781-bbd0-1e6930083dd1-kube-api-access-6xznq\") pod \"61349c4c-5e04-4781-bbd0-1e6930083dd1\" (UID: \"61349c4c-5e04-4781-bbd0-1e6930083dd1\") " Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.128201 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61349c4c-5e04-4781-bbd0-1e6930083dd1-inventory\") pod \"61349c4c-5e04-4781-bbd0-1e6930083dd1\" (UID: \"61349c4c-5e04-4781-bbd0-1e6930083dd1\") " Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.128236 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61349c4c-5e04-4781-bbd0-1e6930083dd1-ssh-key\") pod \"61349c4c-5e04-4781-bbd0-1e6930083dd1\" (UID: \"61349c4c-5e04-4781-bbd0-1e6930083dd1\") " Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.132340 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61349c4c-5e04-4781-bbd0-1e6930083dd1-kube-api-access-6xznq" (OuterVolumeSpecName: "kube-api-access-6xznq") pod "61349c4c-5e04-4781-bbd0-1e6930083dd1" (UID: "61349c4c-5e04-4781-bbd0-1e6930083dd1"). InnerVolumeSpecName "kube-api-access-6xznq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.148429 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61349c4c-5e04-4781-bbd0-1e6930083dd1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "61349c4c-5e04-4781-bbd0-1e6930083dd1" (UID: "61349c4c-5e04-4781-bbd0-1e6930083dd1"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.149033 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61349c4c-5e04-4781-bbd0-1e6930083dd1-inventory" (OuterVolumeSpecName: "inventory") pod "61349c4c-5e04-4781-bbd0-1e6930083dd1" (UID: "61349c4c-5e04-4781-bbd0-1e6930083dd1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.229976 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xznq\" (UniqueName: \"kubernetes.io/projected/61349c4c-5e04-4781-bbd0-1e6930083dd1-kube-api-access-6xznq\") on node \"crc\" DevicePath \"\"" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.230007 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/61349c4c-5e04-4781-bbd0-1e6930083dd1-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.230016 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61349c4c-5e04-4781-bbd0-1e6930083dd1-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.656884 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx" event={"ID":"61349c4c-5e04-4781-bbd0-1e6930083dd1","Type":"ContainerDied","Data":"50b2052d2181dac7e8c337924a8d16b8c7a11d9573d0195578184e652da75da5"} Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.657126 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50b2052d2181dac7e8c337924a8d16b8c7a11d9573d0195578184e652da75da5" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.656919 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ktktx" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.708479 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz"] Dec 05 10:52:13 crc kubenswrapper[4796]: E1205 10:52:13.708838 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61349c4c-5e04-4781-bbd0-1e6930083dd1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.708859 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="61349c4c-5e04-4781-bbd0-1e6930083dd1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.709025 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="61349c4c-5e04-4781-bbd0-1e6930083dd1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.709569 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.711553 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.711832 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.712729 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.713103 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vmswc" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.718626 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz"] Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.736484 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4f493f2-c177-4784-8c6c-07c52336c07a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6mflz\" (UID: \"b4f493f2-c177-4784-8c6c-07c52336c07a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.736520 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb6mf\" (UniqueName: \"kubernetes.io/projected/b4f493f2-c177-4784-8c6c-07c52336c07a-kube-api-access-rb6mf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6mflz\" (UID: \"b4f493f2-c177-4784-8c6c-07c52336c07a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.736610 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4f493f2-c177-4784-8c6c-07c52336c07a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6mflz\" (UID: \"b4f493f2-c177-4784-8c6c-07c52336c07a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.837668 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4f493f2-c177-4784-8c6c-07c52336c07a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6mflz\" (UID: \"b4f493f2-c177-4784-8c6c-07c52336c07a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.837739 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6mf\" (UniqueName: \"kubernetes.io/projected/b4f493f2-c177-4784-8c6c-07c52336c07a-kube-api-access-rb6mf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6mflz\" (UID: \"b4f493f2-c177-4784-8c6c-07c52336c07a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.837829 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4f493f2-c177-4784-8c6c-07c52336c07a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6mflz\" (UID: \"b4f493f2-c177-4784-8c6c-07c52336c07a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.840916 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4f493f2-c177-4784-8c6c-07c52336c07a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6mflz\" (UID: 
\"b4f493f2-c177-4784-8c6c-07c52336c07a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.841141 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4f493f2-c177-4784-8c6c-07c52336c07a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6mflz\" (UID: \"b4f493f2-c177-4784-8c6c-07c52336c07a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz" Dec 05 10:52:13 crc kubenswrapper[4796]: I1205 10:52:13.851149 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb6mf\" (UniqueName: \"kubernetes.io/projected/b4f493f2-c177-4784-8c6c-07c52336c07a-kube-api-access-rb6mf\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6mflz\" (UID: \"b4f493f2-c177-4784-8c6c-07c52336c07a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz" Dec 05 10:52:14 crc kubenswrapper[4796]: I1205 10:52:14.023897 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz" Dec 05 10:52:14 crc kubenswrapper[4796]: I1205 10:52:14.485018 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz"] Dec 05 10:52:14 crc kubenswrapper[4796]: W1205 10:52:14.487383 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4f493f2_c177_4784_8c6c_07c52336c07a.slice/crio-171a02064eeac4f4584a3f62b9a1e86ff54c9b974f6558b4417130a1f159f49d WatchSource:0}: Error finding container 171a02064eeac4f4584a3f62b9a1e86ff54c9b974f6558b4417130a1f159f49d: Status 404 returned error can't find the container with id 171a02064eeac4f4584a3f62b9a1e86ff54c9b974f6558b4417130a1f159f49d Dec 05 10:52:14 crc kubenswrapper[4796]: I1205 10:52:14.667750 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz" event={"ID":"b4f493f2-c177-4784-8c6c-07c52336c07a","Type":"ContainerStarted","Data":"171a02064eeac4f4584a3f62b9a1e86ff54c9b974f6558b4417130a1f159f49d"} Dec 05 10:52:15 crc kubenswrapper[4796]: I1205 10:52:15.677792 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz" event={"ID":"b4f493f2-c177-4784-8c6c-07c52336c07a","Type":"ContainerStarted","Data":"4eafabf62fac601fb9c12a0d63601a5238bf159f6d38a29f52740acbc22614a8"} Dec 05 10:52:15 crc kubenswrapper[4796]: I1205 10:52:15.700816 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz" podStartSLOduration=2.211780765 podStartE2EDuration="2.700800071s" podCreationTimestamp="2025-12-05 10:52:13 +0000 UTC" firstStartedPulling="2025-12-05 10:52:14.489593517 +0000 UTC m=+1480.777699030" lastFinishedPulling="2025-12-05 10:52:14.978612824 +0000 UTC m=+1481.266718336" 
observedRunningTime="2025-12-05 10:52:15.694426011 +0000 UTC m=+1481.982531524" watchObservedRunningTime="2025-12-05 10:52:15.700800071 +0000 UTC m=+1481.988905584" Dec 05 10:52:16 crc kubenswrapper[4796]: I1205 10:52:16.038296 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-qh5mr"] Dec 05 10:52:16 crc kubenswrapper[4796]: I1205 10:52:16.039469 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-qh5mr"] Dec 05 10:52:16 crc kubenswrapper[4796]: I1205 10:52:16.609457 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nhtsw" Dec 05 10:52:16 crc kubenswrapper[4796]: I1205 10:52:16.609507 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nhtsw" Dec 05 10:52:16 crc kubenswrapper[4796]: I1205 10:52:16.653559 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nhtsw" Dec 05 10:52:16 crc kubenswrapper[4796]: I1205 10:52:16.720048 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nhtsw" Dec 05 10:52:16 crc kubenswrapper[4796]: I1205 10:52:16.887196 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nhtsw"] Dec 05 10:52:18 crc kubenswrapper[4796]: I1205 10:52:18.040151 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04e95ca1-d131-4528-baaf-0be6b98a5edf" path="/var/lib/kubelet/pods/04e95ca1-d131-4528-baaf-0be6b98a5edf/volumes" Dec 05 10:52:18 crc kubenswrapper[4796]: I1205 10:52:18.698219 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nhtsw" podUID="280015ca-0c72-44fc-9644-96fba920999e" containerName="registry-server" containerID="cri-o://c34d3c1342820f749e3a0b84dfd1f99bbaf8fb5d07c93ad273402c1b80c76f49" 
gracePeriod=2 Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.089515 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nhtsw" Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.252996 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/280015ca-0c72-44fc-9644-96fba920999e-utilities\") pod \"280015ca-0c72-44fc-9644-96fba920999e\" (UID: \"280015ca-0c72-44fc-9644-96fba920999e\") " Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.253097 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwsfk\" (UniqueName: \"kubernetes.io/projected/280015ca-0c72-44fc-9644-96fba920999e-kube-api-access-kwsfk\") pod \"280015ca-0c72-44fc-9644-96fba920999e\" (UID: \"280015ca-0c72-44fc-9644-96fba920999e\") " Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.253327 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/280015ca-0c72-44fc-9644-96fba920999e-catalog-content\") pod \"280015ca-0c72-44fc-9644-96fba920999e\" (UID: \"280015ca-0c72-44fc-9644-96fba920999e\") " Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.253787 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/280015ca-0c72-44fc-9644-96fba920999e-utilities" (OuterVolumeSpecName: "utilities") pod "280015ca-0c72-44fc-9644-96fba920999e" (UID: "280015ca-0c72-44fc-9644-96fba920999e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.259259 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/280015ca-0c72-44fc-9644-96fba920999e-kube-api-access-kwsfk" (OuterVolumeSpecName: "kube-api-access-kwsfk") pod "280015ca-0c72-44fc-9644-96fba920999e" (UID: "280015ca-0c72-44fc-9644-96fba920999e"). InnerVolumeSpecName "kube-api-access-kwsfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.328211 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/280015ca-0c72-44fc-9644-96fba920999e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "280015ca-0c72-44fc-9644-96fba920999e" (UID: "280015ca-0c72-44fc-9644-96fba920999e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.356206 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/280015ca-0c72-44fc-9644-96fba920999e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.356235 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/280015ca-0c72-44fc-9644-96fba920999e-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.356247 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwsfk\" (UniqueName: \"kubernetes.io/projected/280015ca-0c72-44fc-9644-96fba920999e-kube-api-access-kwsfk\") on node \"crc\" DevicePath \"\"" Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.709925 4796 generic.go:334] "Generic (PLEG): container finished" podID="280015ca-0c72-44fc-9644-96fba920999e" 
containerID="c34d3c1342820f749e3a0b84dfd1f99bbaf8fb5d07c93ad273402c1b80c76f49" exitCode=0 Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.709981 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhtsw" event={"ID":"280015ca-0c72-44fc-9644-96fba920999e","Type":"ContainerDied","Data":"c34d3c1342820f749e3a0b84dfd1f99bbaf8fb5d07c93ad273402c1b80c76f49"} Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.710010 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhtsw" event={"ID":"280015ca-0c72-44fc-9644-96fba920999e","Type":"ContainerDied","Data":"4855c39146f36bdc1a687614a58f514305147be7324a68b1eda6b5ea5200c6fd"} Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.710010 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nhtsw" Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.710029 4796 scope.go:117] "RemoveContainer" containerID="c34d3c1342820f749e3a0b84dfd1f99bbaf8fb5d07c93ad273402c1b80c76f49" Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.727115 4796 scope.go:117] "RemoveContainer" containerID="b3e0f1ff47be4d123b66bac2dd01fb08a5771d0995691656bf892c7354a85104" Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.740730 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nhtsw"] Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.749800 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nhtsw"] Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.750824 4796 scope.go:117] "RemoveContainer" containerID="f1bfdcd88df64f6badc0d70c6380786c640d83c51680436236334f2f507f0e79" Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.784213 4796 scope.go:117] "RemoveContainer" containerID="c34d3c1342820f749e3a0b84dfd1f99bbaf8fb5d07c93ad273402c1b80c76f49" Dec 05 10:52:19 crc 
kubenswrapper[4796]: E1205 10:52:19.784846 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c34d3c1342820f749e3a0b84dfd1f99bbaf8fb5d07c93ad273402c1b80c76f49\": container with ID starting with c34d3c1342820f749e3a0b84dfd1f99bbaf8fb5d07c93ad273402c1b80c76f49 not found: ID does not exist" containerID="c34d3c1342820f749e3a0b84dfd1f99bbaf8fb5d07c93ad273402c1b80c76f49" Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.784891 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c34d3c1342820f749e3a0b84dfd1f99bbaf8fb5d07c93ad273402c1b80c76f49"} err="failed to get container status \"c34d3c1342820f749e3a0b84dfd1f99bbaf8fb5d07c93ad273402c1b80c76f49\": rpc error: code = NotFound desc = could not find container \"c34d3c1342820f749e3a0b84dfd1f99bbaf8fb5d07c93ad273402c1b80c76f49\": container with ID starting with c34d3c1342820f749e3a0b84dfd1f99bbaf8fb5d07c93ad273402c1b80c76f49 not found: ID does not exist" Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.784921 4796 scope.go:117] "RemoveContainer" containerID="b3e0f1ff47be4d123b66bac2dd01fb08a5771d0995691656bf892c7354a85104" Dec 05 10:52:19 crc kubenswrapper[4796]: E1205 10:52:19.785390 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3e0f1ff47be4d123b66bac2dd01fb08a5771d0995691656bf892c7354a85104\": container with ID starting with b3e0f1ff47be4d123b66bac2dd01fb08a5771d0995691656bf892c7354a85104 not found: ID does not exist" containerID="b3e0f1ff47be4d123b66bac2dd01fb08a5771d0995691656bf892c7354a85104" Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.785415 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3e0f1ff47be4d123b66bac2dd01fb08a5771d0995691656bf892c7354a85104"} err="failed to get container status 
\"b3e0f1ff47be4d123b66bac2dd01fb08a5771d0995691656bf892c7354a85104\": rpc error: code = NotFound desc = could not find container \"b3e0f1ff47be4d123b66bac2dd01fb08a5771d0995691656bf892c7354a85104\": container with ID starting with b3e0f1ff47be4d123b66bac2dd01fb08a5771d0995691656bf892c7354a85104 not found: ID does not exist" Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.785428 4796 scope.go:117] "RemoveContainer" containerID="f1bfdcd88df64f6badc0d70c6380786c640d83c51680436236334f2f507f0e79" Dec 05 10:52:19 crc kubenswrapper[4796]: E1205 10:52:19.785873 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1bfdcd88df64f6badc0d70c6380786c640d83c51680436236334f2f507f0e79\": container with ID starting with f1bfdcd88df64f6badc0d70c6380786c640d83c51680436236334f2f507f0e79 not found: ID does not exist" containerID="f1bfdcd88df64f6badc0d70c6380786c640d83c51680436236334f2f507f0e79" Dec 05 10:52:19 crc kubenswrapper[4796]: I1205 10:52:19.785911 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1bfdcd88df64f6badc0d70c6380786c640d83c51680436236334f2f507f0e79"} err="failed to get container status \"f1bfdcd88df64f6badc0d70c6380786c640d83c51680436236334f2f507f0e79\": rpc error: code = NotFound desc = could not find container \"f1bfdcd88df64f6badc0d70c6380786c640d83c51680436236334f2f507f0e79\": container with ID starting with f1bfdcd88df64f6badc0d70c6380786c640d83c51680436236334f2f507f0e79 not found: ID does not exist" Dec 05 10:52:20 crc kubenswrapper[4796]: I1205 10:52:20.045068 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="280015ca-0c72-44fc-9644-96fba920999e" path="/var/lib/kubelet/pods/280015ca-0c72-44fc-9644-96fba920999e/volumes" Dec 05 10:52:31 crc kubenswrapper[4796]: I1205 10:52:31.033513 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-pzwzv"] Dec 05 10:52:31 crc 
kubenswrapper[4796]: I1205 10:52:31.041545 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-ggsqf"] Dec 05 10:52:31 crc kubenswrapper[4796]: I1205 10:52:31.050673 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-7pd69"] Dec 05 10:52:31 crc kubenswrapper[4796]: I1205 10:52:31.054400 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-pzwzv"] Dec 05 10:52:31 crc kubenswrapper[4796]: I1205 10:52:31.059363 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-7pd69"] Dec 05 10:52:31 crc kubenswrapper[4796]: I1205 10:52:31.064245 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-ggsqf"] Dec 05 10:52:32 crc kubenswrapper[4796]: I1205 10:52:32.040851 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15729ebc-4805-4ccd-a70b-ed95183246be" path="/var/lib/kubelet/pods/15729ebc-4805-4ccd-a70b-ed95183246be/volumes" Dec 05 10:52:32 crc kubenswrapper[4796]: I1205 10:52:32.041587 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a11a6bb-3600-4222-b30e-d78931484d32" path="/var/lib/kubelet/pods/1a11a6bb-3600-4222-b30e-d78931484d32/volumes" Dec 05 10:52:32 crc kubenswrapper[4796]: I1205 10:52:32.042093 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25" path="/var/lib/kubelet/pods/b275fa6a-f8a0-4456-9ce0-d21a9cfe9f25/volumes" Dec 05 10:52:35 crc kubenswrapper[4796]: I1205 10:52:35.177615 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 10:52:35 crc kubenswrapper[4796]: I1205 10:52:35.178235 4796 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 10:52:35 crc kubenswrapper[4796]: I1205 10:52:35.178293 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 10:52:35 crc kubenswrapper[4796]: I1205 10:52:35.179172 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03"} pod="openshift-machine-config-operator/machine-config-daemon-9pllw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 10:52:35 crc kubenswrapper[4796]: I1205 10:52:35.179234 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" containerID="cri-o://c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" gracePeriod=600 Dec 05 10:52:35 crc kubenswrapper[4796]: E1205 10:52:35.301032 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:52:35 crc kubenswrapper[4796]: I1205 10:52:35.897558 4796 generic.go:334] "Generic (PLEG): container finished" podID="7796bae1-68a7-44b4-98cc-0dd83da754bc" 
containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" exitCode=0 Dec 05 10:52:35 crc kubenswrapper[4796]: I1205 10:52:35.897657 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerDied","Data":"c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03"} Dec 05 10:52:35 crc kubenswrapper[4796]: I1205 10:52:35.898133 4796 scope.go:117] "RemoveContainer" containerID="960ca04fdaec51a11e5104bd66b840084c3f85c581e931a71e4da412d7f92d59" Dec 05 10:52:35 crc kubenswrapper[4796]: I1205 10:52:35.899327 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:52:35 crc kubenswrapper[4796]: E1205 10:52:35.899718 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:52:36 crc kubenswrapper[4796]: I1205 10:52:36.770939 4796 scope.go:117] "RemoveContainer" containerID="d2f143411097229a08a3e44dae20242bf337ca92d305e9cd0a5a436ed316f343" Dec 05 10:52:36 crc kubenswrapper[4796]: I1205 10:52:36.797628 4796 scope.go:117] "RemoveContainer" containerID="400d6ee6c59313c7030142b9e3451c7046a5db8f67c0a072693dfe1ddf2596f3" Dec 05 10:52:36 crc kubenswrapper[4796]: I1205 10:52:36.839399 4796 scope.go:117] "RemoveContainer" containerID="bf00b761b856b10a4951e03da043f3a1511867016c55a38f57c935043bb69fe5" Dec 05 10:52:36 crc kubenswrapper[4796]: I1205 10:52:36.864284 4796 scope.go:117] "RemoveContainer" containerID="c7a5086aa98121866d5c5433b7f17b603bc95e3f6ef15f907b176cd2da32b1ee" Dec 05 
10:52:36 crc kubenswrapper[4796]: I1205 10:52:36.941721 4796 scope.go:117] "RemoveContainer" containerID="b5ca45481fee61e47936682215a8abf913a4be10b620f8e09d6b06b44dc0161d" Dec 05 10:52:36 crc kubenswrapper[4796]: I1205 10:52:36.968815 4796 scope.go:117] "RemoveContainer" containerID="49db4fe06cf15ba77e62b78053731267566265b5f85c4ce01789e65926546d28" Dec 05 10:52:36 crc kubenswrapper[4796]: I1205 10:52:36.997481 4796 scope.go:117] "RemoveContainer" containerID="a6eccc45b6647bef041c33d62a9fc5f67eab2da4a5c4f8e725cd959619383c16" Dec 05 10:52:37 crc kubenswrapper[4796]: I1205 10:52:37.029331 4796 scope.go:117] "RemoveContainer" containerID="d2a7c7a81c273c72ab77d7542bb0d9da5a813aaa32bab8cc5356a140628e6fce" Dec 05 10:52:37 crc kubenswrapper[4796]: I1205 10:52:37.047988 4796 scope.go:117] "RemoveContainer" containerID="48ff5b23dafcbc2a56d1da5ed9be272255df80129833f9880f658f5c376a34ab" Dec 05 10:52:47 crc kubenswrapper[4796]: I1205 10:52:47.031908 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:52:47 crc kubenswrapper[4796]: I1205 10:52:47.032083 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-564f-account-create-pxxp8"] Dec 05 10:52:47 crc kubenswrapper[4796]: E1205 10:52:47.032615 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:52:47 crc kubenswrapper[4796]: I1205 10:52:47.038001 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4868-account-create-l5rm9"] Dec 05 10:52:47 crc kubenswrapper[4796]: I1205 10:52:47.043086 4796 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/nova-cell1-4868-account-create-l5rm9"] Dec 05 10:52:47 crc kubenswrapper[4796]: I1205 10:52:47.049259 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-564f-account-create-pxxp8"] Dec 05 10:52:47 crc kubenswrapper[4796]: I1205 10:52:47.054301 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-f7bc-account-create-mgfrs"] Dec 05 10:52:47 crc kubenswrapper[4796]: I1205 10:52:47.058899 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-f7bc-account-create-mgfrs"] Dec 05 10:52:48 crc kubenswrapper[4796]: I1205 10:52:48.042870 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c6959f5-c89c-47a0-ac74-03e207adc303" path="/var/lib/kubelet/pods/7c6959f5-c89c-47a0-ac74-03e207adc303/volumes" Dec 05 10:52:48 crc kubenswrapper[4796]: I1205 10:52:48.043372 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cf65f7c-cddb-463c-9be0-939961c5e902" path="/var/lib/kubelet/pods/7cf65f7c-cddb-463c-9be0-939961c5e902/volumes" Dec 05 10:52:48 crc kubenswrapper[4796]: I1205 10:52:48.043846 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d42b0029-cd05-4ffc-99cc-b6230c464e58" path="/var/lib/kubelet/pods/d42b0029-cd05-4ffc-99cc-b6230c464e58/volumes" Dec 05 10:52:51 crc kubenswrapper[4796]: I1205 10:52:51.069610 4796 generic.go:334] "Generic (PLEG): container finished" podID="b4f493f2-c177-4784-8c6c-07c52336c07a" containerID="4eafabf62fac601fb9c12a0d63601a5238bf159f6d38a29f52740acbc22614a8" exitCode=0 Dec 05 10:52:51 crc kubenswrapper[4796]: I1205 10:52:51.069712 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz" event={"ID":"b4f493f2-c177-4784-8c6c-07c52336c07a","Type":"ContainerDied","Data":"4eafabf62fac601fb9c12a0d63601a5238bf159f6d38a29f52740acbc22614a8"} Dec 05 10:52:52 crc kubenswrapper[4796]: I1205 10:52:52.407669 
4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz" Dec 05 10:52:52 crc kubenswrapper[4796]: I1205 10:52:52.446260 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4f493f2-c177-4784-8c6c-07c52336c07a-inventory\") pod \"b4f493f2-c177-4784-8c6c-07c52336c07a\" (UID: \"b4f493f2-c177-4784-8c6c-07c52336c07a\") " Dec 05 10:52:52 crc kubenswrapper[4796]: I1205 10:52:52.446460 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb6mf\" (UniqueName: \"kubernetes.io/projected/b4f493f2-c177-4784-8c6c-07c52336c07a-kube-api-access-rb6mf\") pod \"b4f493f2-c177-4784-8c6c-07c52336c07a\" (UID: \"b4f493f2-c177-4784-8c6c-07c52336c07a\") " Dec 05 10:52:52 crc kubenswrapper[4796]: I1205 10:52:52.446564 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4f493f2-c177-4784-8c6c-07c52336c07a-ssh-key\") pod \"b4f493f2-c177-4784-8c6c-07c52336c07a\" (UID: \"b4f493f2-c177-4784-8c6c-07c52336c07a\") " Dec 05 10:52:52 crc kubenswrapper[4796]: I1205 10:52:52.452555 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4f493f2-c177-4784-8c6c-07c52336c07a-kube-api-access-rb6mf" (OuterVolumeSpecName: "kube-api-access-rb6mf") pod "b4f493f2-c177-4784-8c6c-07c52336c07a" (UID: "b4f493f2-c177-4784-8c6c-07c52336c07a"). InnerVolumeSpecName "kube-api-access-rb6mf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:52:52 crc kubenswrapper[4796]: I1205 10:52:52.473057 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4f493f2-c177-4784-8c6c-07c52336c07a-inventory" (OuterVolumeSpecName: "inventory") pod "b4f493f2-c177-4784-8c6c-07c52336c07a" (UID: "b4f493f2-c177-4784-8c6c-07c52336c07a"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:52:52 crc kubenswrapper[4796]: I1205 10:52:52.474916 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4f493f2-c177-4784-8c6c-07c52336c07a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b4f493f2-c177-4784-8c6c-07c52336c07a" (UID: "b4f493f2-c177-4784-8c6c-07c52336c07a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:52:52 crc kubenswrapper[4796]: I1205 10:52:52.549638 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb6mf\" (UniqueName: \"kubernetes.io/projected/b4f493f2-c177-4784-8c6c-07c52336c07a-kube-api-access-rb6mf\") on node \"crc\" DevicePath \"\"" Dec 05 10:52:52 crc kubenswrapper[4796]: I1205 10:52:52.549667 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b4f493f2-c177-4784-8c6c-07c52336c07a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 10:52:52 crc kubenswrapper[4796]: I1205 10:52:52.549678 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b4f493f2-c177-4784-8c6c-07c52336c07a-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.085515 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz" event={"ID":"b4f493f2-c177-4784-8c6c-07c52336c07a","Type":"ContainerDied","Data":"171a02064eeac4f4584a3f62b9a1e86ff54c9b974f6558b4417130a1f159f49d"} Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.085894 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="171a02064eeac4f4584a3f62b9a1e86ff54c9b974f6558b4417130a1f159f49d" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.085553 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6mflz" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.183245 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-t9tj8"] Dec 05 10:52:53 crc kubenswrapper[4796]: E1205 10:52:53.183573 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="280015ca-0c72-44fc-9644-96fba920999e" containerName="extract-content" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.183592 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="280015ca-0c72-44fc-9644-96fba920999e" containerName="extract-content" Dec 05 10:52:53 crc kubenswrapper[4796]: E1205 10:52:53.183612 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="280015ca-0c72-44fc-9644-96fba920999e" containerName="registry-server" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.183619 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="280015ca-0c72-44fc-9644-96fba920999e" containerName="registry-server" Dec 05 10:52:53 crc kubenswrapper[4796]: E1205 10:52:53.183629 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4f493f2-c177-4784-8c6c-07c52336c07a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.183636 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4f493f2-c177-4784-8c6c-07c52336c07a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 05 10:52:53 crc kubenswrapper[4796]: E1205 10:52:53.183644 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="280015ca-0c72-44fc-9644-96fba920999e" containerName="extract-utilities" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.183651 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="280015ca-0c72-44fc-9644-96fba920999e" containerName="extract-utilities" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.183837 4796 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b4f493f2-c177-4784-8c6c-07c52336c07a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.183860 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="280015ca-0c72-44fc-9644-96fba920999e" containerName="registry-server" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.184382 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-t9tj8" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.186125 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.187219 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.187381 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vmswc" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.194936 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-t9tj8"] Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.196279 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.264213 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncwbn\" (UniqueName: \"kubernetes.io/projected/9f94d556-2c7c-42db-8c80-521d055ccc68-kube-api-access-ncwbn\") pod \"ssh-known-hosts-edpm-deployment-t9tj8\" (UID: \"9f94d556-2c7c-42db-8c80-521d055ccc68\") " pod="openstack/ssh-known-hosts-edpm-deployment-t9tj8" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.264283 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9f94d556-2c7c-42db-8c80-521d055ccc68-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-t9tj8\" (UID: \"9f94d556-2c7c-42db-8c80-521d055ccc68\") " pod="openstack/ssh-known-hosts-edpm-deployment-t9tj8" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.264519 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f94d556-2c7c-42db-8c80-521d055ccc68-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-t9tj8\" (UID: \"9f94d556-2c7c-42db-8c80-521d055ccc68\") " pod="openstack/ssh-known-hosts-edpm-deployment-t9tj8" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.367388 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncwbn\" (UniqueName: \"kubernetes.io/projected/9f94d556-2c7c-42db-8c80-521d055ccc68-kube-api-access-ncwbn\") pod \"ssh-known-hosts-edpm-deployment-t9tj8\" (UID: \"9f94d556-2c7c-42db-8c80-521d055ccc68\") " pod="openstack/ssh-known-hosts-edpm-deployment-t9tj8" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.367464 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9f94d556-2c7c-42db-8c80-521d055ccc68-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-t9tj8\" (UID: \"9f94d556-2c7c-42db-8c80-521d055ccc68\") " pod="openstack/ssh-known-hosts-edpm-deployment-t9tj8" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.367642 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f94d556-2c7c-42db-8c80-521d055ccc68-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-t9tj8\" (UID: \"9f94d556-2c7c-42db-8c80-521d055ccc68\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-t9tj8" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.371735 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f94d556-2c7c-42db-8c80-521d055ccc68-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-t9tj8\" (UID: \"9f94d556-2c7c-42db-8c80-521d055ccc68\") " pod="openstack/ssh-known-hosts-edpm-deployment-t9tj8" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.372033 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9f94d556-2c7c-42db-8c80-521d055ccc68-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-t9tj8\" (UID: \"9f94d556-2c7c-42db-8c80-521d055ccc68\") " pod="openstack/ssh-known-hosts-edpm-deployment-t9tj8" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.380802 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncwbn\" (UniqueName: \"kubernetes.io/projected/9f94d556-2c7c-42db-8c80-521d055ccc68-kube-api-access-ncwbn\") pod \"ssh-known-hosts-edpm-deployment-t9tj8\" (UID: \"9f94d556-2c7c-42db-8c80-521d055ccc68\") " pod="openstack/ssh-known-hosts-edpm-deployment-t9tj8" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.501254 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-t9tj8" Dec 05 10:52:53 crc kubenswrapper[4796]: I1205 10:52:53.955323 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-t9tj8"] Dec 05 10:52:54 crc kubenswrapper[4796]: I1205 10:52:54.095915 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-t9tj8" event={"ID":"9f94d556-2c7c-42db-8c80-521d055ccc68","Type":"ContainerStarted","Data":"7173d476a5148a8dd47c55897360cb715f92bbbb702278a7b90440ec50893137"} Dec 05 10:52:55 crc kubenswrapper[4796]: I1205 10:52:55.104717 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-t9tj8" event={"ID":"9f94d556-2c7c-42db-8c80-521d055ccc68","Type":"ContainerStarted","Data":"4f60c57cfb8ee56ddcae34a6e9898958664fe320a3af5450fb75d1d9a7386f4a"} Dec 05 10:52:55 crc kubenswrapper[4796]: I1205 10:52:55.123879 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-t9tj8" podStartSLOduration=1.590052827 podStartE2EDuration="2.123859997s" podCreationTimestamp="2025-12-05 10:52:53 +0000 UTC" firstStartedPulling="2025-12-05 10:52:53.962514341 +0000 UTC m=+1520.250619854" lastFinishedPulling="2025-12-05 10:52:54.49632151 +0000 UTC m=+1520.784427024" observedRunningTime="2025-12-05 10:52:55.118538827 +0000 UTC m=+1521.406644360" watchObservedRunningTime="2025-12-05 10:52:55.123859997 +0000 UTC m=+1521.411965509" Dec 05 10:53:00 crc kubenswrapper[4796]: I1205 10:53:00.144221 4796 generic.go:334] "Generic (PLEG): container finished" podID="9f94d556-2c7c-42db-8c80-521d055ccc68" containerID="4f60c57cfb8ee56ddcae34a6e9898958664fe320a3af5450fb75d1d9a7386f4a" exitCode=0 Dec 05 10:53:00 crc kubenswrapper[4796]: I1205 10:53:00.144313 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-t9tj8" 
event={"ID":"9f94d556-2c7c-42db-8c80-521d055ccc68","Type":"ContainerDied","Data":"4f60c57cfb8ee56ddcae34a6e9898958664fe320a3af5450fb75d1d9a7386f4a"} Dec 05 10:53:01 crc kubenswrapper[4796]: I1205 10:53:01.031819 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:53:01 crc kubenswrapper[4796]: E1205 10:53:01.032525 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:53:01 crc kubenswrapper[4796]: I1205 10:53:01.552426 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-t9tj8" Dec 05 10:53:01 crc kubenswrapper[4796]: I1205 10:53:01.689663 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncwbn\" (UniqueName: \"kubernetes.io/projected/9f94d556-2c7c-42db-8c80-521d055ccc68-kube-api-access-ncwbn\") pod \"9f94d556-2c7c-42db-8c80-521d055ccc68\" (UID: \"9f94d556-2c7c-42db-8c80-521d055ccc68\") " Dec 05 10:53:01 crc kubenswrapper[4796]: I1205 10:53:01.689813 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f94d556-2c7c-42db-8c80-521d055ccc68-ssh-key-openstack-edpm-ipam\") pod \"9f94d556-2c7c-42db-8c80-521d055ccc68\" (UID: \"9f94d556-2c7c-42db-8c80-521d055ccc68\") " Dec 05 10:53:01 crc kubenswrapper[4796]: I1205 10:53:01.689878 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: 
\"kubernetes.io/secret/9f94d556-2c7c-42db-8c80-521d055ccc68-inventory-0\") pod \"9f94d556-2c7c-42db-8c80-521d055ccc68\" (UID: \"9f94d556-2c7c-42db-8c80-521d055ccc68\") " Dec 05 10:53:01 crc kubenswrapper[4796]: I1205 10:53:01.696382 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f94d556-2c7c-42db-8c80-521d055ccc68-kube-api-access-ncwbn" (OuterVolumeSpecName: "kube-api-access-ncwbn") pod "9f94d556-2c7c-42db-8c80-521d055ccc68" (UID: "9f94d556-2c7c-42db-8c80-521d055ccc68"). InnerVolumeSpecName "kube-api-access-ncwbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:53:01 crc kubenswrapper[4796]: I1205 10:53:01.736701 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f94d556-2c7c-42db-8c80-521d055ccc68-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9f94d556-2c7c-42db-8c80-521d055ccc68" (UID: "9f94d556-2c7c-42db-8c80-521d055ccc68"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:53:01 crc kubenswrapper[4796]: I1205 10:53:01.736997 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f94d556-2c7c-42db-8c80-521d055ccc68-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "9f94d556-2c7c-42db-8c80-521d055ccc68" (UID: "9f94d556-2c7c-42db-8c80-521d055ccc68"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:53:01 crc kubenswrapper[4796]: I1205 10:53:01.792902 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncwbn\" (UniqueName: \"kubernetes.io/projected/9f94d556-2c7c-42db-8c80-521d055ccc68-kube-api-access-ncwbn\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:01 crc kubenswrapper[4796]: I1205 10:53:01.793209 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9f94d556-2c7c-42db-8c80-521d055ccc68-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:01 crc kubenswrapper[4796]: I1205 10:53:01.793220 4796 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9f94d556-2c7c-42db-8c80-521d055ccc68-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.164840 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-t9tj8" event={"ID":"9f94d556-2c7c-42db-8c80-521d055ccc68","Type":"ContainerDied","Data":"7173d476a5148a8dd47c55897360cb715f92bbbb702278a7b90440ec50893137"} Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.164887 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7173d476a5148a8dd47c55897360cb715f92bbbb702278a7b90440ec50893137" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.164926 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-t9tj8" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.219317 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4"] Dec 05 10:53:02 crc kubenswrapper[4796]: E1205 10:53:02.219641 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f94d556-2c7c-42db-8c80-521d055ccc68" containerName="ssh-known-hosts-edpm-deployment" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.219653 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f94d556-2c7c-42db-8c80-521d055ccc68" containerName="ssh-known-hosts-edpm-deployment" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.219840 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f94d556-2c7c-42db-8c80-521d055ccc68" containerName="ssh-known-hosts-edpm-deployment" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.220409 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.225198 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.225340 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.225469 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vmswc" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.225517 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4"] Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.225579 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.303763 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wd4c\" (UniqueName: \"kubernetes.io/projected/ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f-kube-api-access-6wd4c\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xssb4\" (UID: \"ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.303848 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xssb4\" (UID: \"ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.303877 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xssb4\" (UID: \"ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.406416 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wd4c\" (UniqueName: \"kubernetes.io/projected/ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f-kube-api-access-6wd4c\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xssb4\" (UID: \"ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.406557 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xssb4\" (UID: \"ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.406596 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xssb4\" (UID: \"ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.412014 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xssb4\" (UID: \"ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.415956 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xssb4\" (UID: \"ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.420432 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wd4c\" (UniqueName: \"kubernetes.io/projected/ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f-kube-api-access-6wd4c\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xssb4\" (UID: \"ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4" Dec 05 10:53:02 crc kubenswrapper[4796]: I1205 10:53:02.553418 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4" Dec 05 10:53:03 crc kubenswrapper[4796]: I1205 10:53:03.007471 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4"] Dec 05 10:53:03 crc kubenswrapper[4796]: I1205 10:53:03.037288 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptbpr"] Dec 05 10:53:03 crc kubenswrapper[4796]: I1205 10:53:03.042552 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-ptbpr"] Dec 05 10:53:03 crc kubenswrapper[4796]: I1205 10:53:03.172077 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4" event={"ID":"ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f","Type":"ContainerStarted","Data":"2a0b0329897a151b92a354cb51ba59e8eb99ec83d3d1fd37c188fe973e9b7bab"} Dec 05 10:53:04 crc kubenswrapper[4796]: I1205 10:53:04.041720 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ae7419c-e111-4405-8ed0-90f518e557d8" path="/var/lib/kubelet/pods/9ae7419c-e111-4405-8ed0-90f518e557d8/volumes" Dec 05 10:53:04 crc kubenswrapper[4796]: I1205 10:53:04.183059 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4" event={"ID":"ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f","Type":"ContainerStarted","Data":"0dd36f055e094518da4d05fa5191079bc80d7a7930aee48f0163f27479e6ee0f"} Dec 05 10:53:04 crc kubenswrapper[4796]: I1205 10:53:04.196570 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4" podStartSLOduration=1.688018507 podStartE2EDuration="2.196552343s" podCreationTimestamp="2025-12-05 10:53:02 +0000 UTC" firstStartedPulling="2025-12-05 10:53:03.007094222 +0000 UTC m=+1529.295199734" lastFinishedPulling="2025-12-05 10:53:03.515628057 +0000 UTC 
m=+1529.803733570" observedRunningTime="2025-12-05 10:53:04.193118302 +0000 UTC m=+1530.481223815" watchObservedRunningTime="2025-12-05 10:53:04.196552343 +0000 UTC m=+1530.484657856" Dec 05 10:53:10 crc kubenswrapper[4796]: I1205 10:53:10.228712 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4" event={"ID":"ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f","Type":"ContainerDied","Data":"0dd36f055e094518da4d05fa5191079bc80d7a7930aee48f0163f27479e6ee0f"} Dec 05 10:53:10 crc kubenswrapper[4796]: I1205 10:53:10.228696 4796 generic.go:334] "Generic (PLEG): container finished" podID="ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f" containerID="0dd36f055e094518da4d05fa5191079bc80d7a7930aee48f0163f27479e6ee0f" exitCode=0 Dec 05 10:53:11 crc kubenswrapper[4796]: I1205 10:53:11.553140 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4" Dec 05 10:53:11 crc kubenswrapper[4796]: I1205 10:53:11.697298 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f-ssh-key\") pod \"ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f\" (UID: \"ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f\") " Dec 05 10:53:11 crc kubenswrapper[4796]: I1205 10:53:11.697391 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wd4c\" (UniqueName: \"kubernetes.io/projected/ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f-kube-api-access-6wd4c\") pod \"ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f\" (UID: \"ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f\") " Dec 05 10:53:11 crc kubenswrapper[4796]: I1205 10:53:11.697424 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f-inventory\") pod \"ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f\" (UID: 
\"ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f\") " Dec 05 10:53:11 crc kubenswrapper[4796]: I1205 10:53:11.701907 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f-kube-api-access-6wd4c" (OuterVolumeSpecName: "kube-api-access-6wd4c") pod "ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f" (UID: "ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f"). InnerVolumeSpecName "kube-api-access-6wd4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:53:11 crc kubenswrapper[4796]: I1205 10:53:11.719068 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f-inventory" (OuterVolumeSpecName: "inventory") pod "ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f" (UID: "ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:53:11 crc kubenswrapper[4796]: I1205 10:53:11.722164 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f" (UID: "ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:53:11 crc kubenswrapper[4796]: I1205 10:53:11.799299 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wd4c\" (UniqueName: \"kubernetes.io/projected/ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f-kube-api-access-6wd4c\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:11 crc kubenswrapper[4796]: I1205 10:53:11.799326 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:11 crc kubenswrapper[4796]: I1205 10:53:11.799338 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.259801 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4" event={"ID":"ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f","Type":"ContainerDied","Data":"2a0b0329897a151b92a354cb51ba59e8eb99ec83d3d1fd37c188fe973e9b7bab"} Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.260090 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a0b0329897a151b92a354cb51ba59e8eb99ec83d3d1fd37c188fe973e9b7bab" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.259862 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xssb4" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.299901 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr"] Dec 05 10:53:12 crc kubenswrapper[4796]: E1205 10:53:12.300280 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.300299 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.300477 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.301085 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.302980 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vmswc" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.303422 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.303590 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.303746 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.310100 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3328299-a078-40d9-90fd-94a0b4145ae5-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr\" (UID: \"b3328299-a078-40d9-90fd-94a0b4145ae5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.310159 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9snqv\" (UniqueName: \"kubernetes.io/projected/b3328299-a078-40d9-90fd-94a0b4145ae5-kube-api-access-9snqv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr\" (UID: \"b3328299-a078-40d9-90fd-94a0b4145ae5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.310281 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3328299-a078-40d9-90fd-94a0b4145ae5-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr\" (UID: \"b3328299-a078-40d9-90fd-94a0b4145ae5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.312257 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr"] Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.412124 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3328299-a078-40d9-90fd-94a0b4145ae5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr\" (UID: \"b3328299-a078-40d9-90fd-94a0b4145ae5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.412505 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3328299-a078-40d9-90fd-94a0b4145ae5-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr\" (UID: \"b3328299-a078-40d9-90fd-94a0b4145ae5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.412635 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9snqv\" (UniqueName: \"kubernetes.io/projected/b3328299-a078-40d9-90fd-94a0b4145ae5-kube-api-access-9snqv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr\" (UID: \"b3328299-a078-40d9-90fd-94a0b4145ae5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.417757 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3328299-a078-40d9-90fd-94a0b4145ae5-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr\" (UID: \"b3328299-a078-40d9-90fd-94a0b4145ae5\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.417771 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3328299-a078-40d9-90fd-94a0b4145ae5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr\" (UID: \"b3328299-a078-40d9-90fd-94a0b4145ae5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.428079 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9snqv\" (UniqueName: \"kubernetes.io/projected/b3328299-a078-40d9-90fd-94a0b4145ae5-kube-api-access-9snqv\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr\" (UID: \"b3328299-a078-40d9-90fd-94a0b4145ae5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr" Dec 05 10:53:12 crc kubenswrapper[4796]: I1205 10:53:12.614125 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr" Dec 05 10:53:13 crc kubenswrapper[4796]: I1205 10:53:13.060373 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr"] Dec 05 10:53:13 crc kubenswrapper[4796]: I1205 10:53:13.268879 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr" event={"ID":"b3328299-a078-40d9-90fd-94a0b4145ae5","Type":"ContainerStarted","Data":"6c0a8ace231a8ccbf3b425752d42e456e177affedfa3fc07876951999c949782"} Dec 05 10:53:14 crc kubenswrapper[4796]: I1205 10:53:14.276849 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr" event={"ID":"b3328299-a078-40d9-90fd-94a0b4145ae5","Type":"ContainerStarted","Data":"f5e16a979112ed24c338b43aacddfe6ebbf0cbae3aff4d52852fe20bbc5893f7"} Dec 05 10:53:14 crc kubenswrapper[4796]: I1205 10:53:14.287427 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr" podStartSLOduration=1.769596331 podStartE2EDuration="2.287409765s" podCreationTimestamp="2025-12-05 10:53:12 +0000 UTC" firstStartedPulling="2025-12-05 10:53:13.071075796 +0000 UTC m=+1539.359181308" lastFinishedPulling="2025-12-05 10:53:13.588889229 +0000 UTC m=+1539.876994742" observedRunningTime="2025-12-05 10:53:14.286282124 +0000 UTC m=+1540.574387638" watchObservedRunningTime="2025-12-05 10:53:14.287409765 +0000 UTC m=+1540.575515279" Dec 05 10:53:15 crc kubenswrapper[4796]: I1205 10:53:15.031321 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:53:15 crc kubenswrapper[4796]: E1205 10:53:15.031574 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:53:21 crc kubenswrapper[4796]: I1205 10:53:21.338958 4796 generic.go:334] "Generic (PLEG): container finished" podID="b3328299-a078-40d9-90fd-94a0b4145ae5" containerID="f5e16a979112ed24c338b43aacddfe6ebbf0cbae3aff4d52852fe20bbc5893f7" exitCode=0 Dec 05 10:53:21 crc kubenswrapper[4796]: I1205 10:53:21.339047 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr" event={"ID":"b3328299-a078-40d9-90fd-94a0b4145ae5","Type":"ContainerDied","Data":"f5e16a979112ed24c338b43aacddfe6ebbf0cbae3aff4d52852fe20bbc5893f7"} Dec 05 10:53:22 crc kubenswrapper[4796]: I1205 10:53:22.041308 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-98wbd"] Dec 05 10:53:22 crc kubenswrapper[4796]: I1205 10:53:22.045873 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-98wbd"] Dec 05 10:53:22 crc kubenswrapper[4796]: I1205 10:53:22.687951 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr" Dec 05 10:53:22 crc kubenswrapper[4796]: I1205 10:53:22.826671 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3328299-a078-40d9-90fd-94a0b4145ae5-ssh-key\") pod \"b3328299-a078-40d9-90fd-94a0b4145ae5\" (UID: \"b3328299-a078-40d9-90fd-94a0b4145ae5\") " Dec 05 10:53:22 crc kubenswrapper[4796]: I1205 10:53:22.826905 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3328299-a078-40d9-90fd-94a0b4145ae5-inventory\") pod \"b3328299-a078-40d9-90fd-94a0b4145ae5\" (UID: \"b3328299-a078-40d9-90fd-94a0b4145ae5\") " Dec 05 10:53:22 crc kubenswrapper[4796]: I1205 10:53:22.826957 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9snqv\" (UniqueName: \"kubernetes.io/projected/b3328299-a078-40d9-90fd-94a0b4145ae5-kube-api-access-9snqv\") pod \"b3328299-a078-40d9-90fd-94a0b4145ae5\" (UID: \"b3328299-a078-40d9-90fd-94a0b4145ae5\") " Dec 05 10:53:22 crc kubenswrapper[4796]: I1205 10:53:22.836121 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3328299-a078-40d9-90fd-94a0b4145ae5-kube-api-access-9snqv" (OuterVolumeSpecName: "kube-api-access-9snqv") pod "b3328299-a078-40d9-90fd-94a0b4145ae5" (UID: "b3328299-a078-40d9-90fd-94a0b4145ae5"). InnerVolumeSpecName "kube-api-access-9snqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:53:22 crc kubenswrapper[4796]: I1205 10:53:22.852928 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3328299-a078-40d9-90fd-94a0b4145ae5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b3328299-a078-40d9-90fd-94a0b4145ae5" (UID: "b3328299-a078-40d9-90fd-94a0b4145ae5"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:53:22 crc kubenswrapper[4796]: I1205 10:53:22.855837 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3328299-a078-40d9-90fd-94a0b4145ae5-inventory" (OuterVolumeSpecName: "inventory") pod "b3328299-a078-40d9-90fd-94a0b4145ae5" (UID: "b3328299-a078-40d9-90fd-94a0b4145ae5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:53:22 crc kubenswrapper[4796]: I1205 10:53:22.933286 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3328299-a078-40d9-90fd-94a0b4145ae5-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:22 crc kubenswrapper[4796]: I1205 10:53:22.933347 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9snqv\" (UniqueName: \"kubernetes.io/projected/b3328299-a078-40d9-90fd-94a0b4145ae5-kube-api-access-9snqv\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:22 crc kubenswrapper[4796]: I1205 10:53:22.933363 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3328299-a078-40d9-90fd-94a0b4145ae5-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.023470 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-rkwvn"] Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.028834 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-rkwvn"] Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.362553 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr" event={"ID":"b3328299-a078-40d9-90fd-94a0b4145ae5","Type":"ContainerDied","Data":"6c0a8ace231a8ccbf3b425752d42e456e177affedfa3fc07876951999c949782"} Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.363082 4796 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c0a8ace231a8ccbf3b425752d42e456e177affedfa3fc07876951999c949782" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.362669 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.448607 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp"] Dec 05 10:53:23 crc kubenswrapper[4796]: E1205 10:53:23.449039 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3328299-a078-40d9-90fd-94a0b4145ae5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.449067 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3328299-a078-40d9-90fd-94a0b4145ae5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.449306 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3328299-a078-40d9-90fd-94a0b4145ae5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.450030 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.453146 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.453311 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.453330 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.453365 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.453449 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.453513 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.456665 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.456873 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vmswc" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.461479 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp"] Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.548365 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.548501 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brkcp\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-kube-api-access-brkcp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.548563 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.548658 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.548713 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.548761 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.548852 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.548891 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.548922 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.549123 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.549193 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.549237 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.549270 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.549353 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.650671 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.650736 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.650770 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.650811 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.650836 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.650861 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.650895 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.650932 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.650958 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.650979 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.651010 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.651073 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.651098 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brkcp\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-kube-api-access-brkcp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.651139 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.655874 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.657300 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.659342 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.659380 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.659403 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.659556 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.659977 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.660013 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.660171 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.660240 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.660875 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.661117 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.669583 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brkcp\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-kube-api-access-brkcp\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.672436 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:23 crc kubenswrapper[4796]: I1205 10:53:23.766578 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:24 crc kubenswrapper[4796]: I1205 10:53:24.043317 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c470255-330c-47ec-91f7-a566792f753f" path="/var/lib/kubelet/pods/0c470255-330c-47ec-91f7-a566792f753f/volumes" Dec 05 10:53:24 crc kubenswrapper[4796]: I1205 10:53:24.043862 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="912d5bbd-238c-49b9-ace1-e201dead6822" path="/var/lib/kubelet/pods/912d5bbd-238c-49b9-ace1-e201dead6822/volumes" Dec 05 10:53:24 crc kubenswrapper[4796]: I1205 10:53:24.266491 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp"] Dec 05 10:53:24 crc kubenswrapper[4796]: I1205 10:53:24.370906 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" event={"ID":"c5d11e0e-4240-4769-8a8a-945f78970a6c","Type":"ContainerStarted","Data":"28cdda3725b3b0eaf3e17c2605aad8713c4c913415e55c35d2c7948b20c55dfb"} Dec 05 10:53:25 crc kubenswrapper[4796]: I1205 10:53:25.382769 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" event={"ID":"c5d11e0e-4240-4769-8a8a-945f78970a6c","Type":"ContainerStarted","Data":"d671f13b9d7b9b41686b974312f175b9a9dcccf6733238631256806e081a9675"} Dec 05 10:53:25 crc kubenswrapper[4796]: I1205 10:53:25.408379 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" 
podStartSLOduration=1.9211700189999998 podStartE2EDuration="2.408360906s" podCreationTimestamp="2025-12-05 10:53:23 +0000 UTC" firstStartedPulling="2025-12-05 10:53:24.271952341 +0000 UTC m=+1550.560057854" lastFinishedPulling="2025-12-05 10:53:24.759143217 +0000 UTC m=+1551.047248741" observedRunningTime="2025-12-05 10:53:25.40162299 +0000 UTC m=+1551.689728504" watchObservedRunningTime="2025-12-05 10:53:25.408360906 +0000 UTC m=+1551.696466420" Dec 05 10:53:26 crc kubenswrapper[4796]: I1205 10:53:26.031472 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:53:26 crc kubenswrapper[4796]: E1205 10:53:26.032115 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:53:37 crc kubenswrapper[4796]: I1205 10:53:37.255452 4796 scope.go:117] "RemoveContainer" containerID="9eb9b47e90f2a6bb10fdfcda2dd134226808b10dab1880334da62cca21e83c9b" Dec 05 10:53:37 crc kubenswrapper[4796]: I1205 10:53:37.281091 4796 scope.go:117] "RemoveContainer" containerID="dce665f6bbe16b4036f2c0f4c1773649d9f17af91474ea178f951a42c0d8a1bb" Dec 05 10:53:37 crc kubenswrapper[4796]: I1205 10:53:37.318680 4796 scope.go:117] "RemoveContainer" containerID="1c0aee1babc34988ff6c54c0d82d2abec5769b501fb1feb95bc73a7646900007" Dec 05 10:53:37 crc kubenswrapper[4796]: I1205 10:53:37.375772 4796 scope.go:117] "RemoveContainer" containerID="e479a14ed99eb8d50f49b02aa904852105d4ad6e766e5ed1d765881bf4f751e2" Dec 05 10:53:37 crc kubenswrapper[4796]: I1205 10:53:37.414799 4796 scope.go:117] "RemoveContainer" 
containerID="2e560f5cb836fd7323df6a132ee6cf2300f605427c9f76d527575ec557f06325" Dec 05 10:53:37 crc kubenswrapper[4796]: I1205 10:53:37.434261 4796 scope.go:117] "RemoveContainer" containerID="7cb0bdaa0e5a1d7891eadf4ac6b264c3d5e9f6d5a45484bb72fdb0b16552c8fd" Dec 05 10:53:38 crc kubenswrapper[4796]: I1205 10:53:38.032210 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:53:38 crc kubenswrapper[4796]: E1205 10:53:38.033769 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:53:51 crc kubenswrapper[4796]: I1205 10:53:51.647425 4796 generic.go:334] "Generic (PLEG): container finished" podID="c5d11e0e-4240-4769-8a8a-945f78970a6c" containerID="d671f13b9d7b9b41686b974312f175b9a9dcccf6733238631256806e081a9675" exitCode=0 Dec 05 10:53:51 crc kubenswrapper[4796]: I1205 10:53:51.647528 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" event={"ID":"c5d11e0e-4240-4769-8a8a-945f78970a6c","Type":"ContainerDied","Data":"d671f13b9d7b9b41686b974312f175b9a9dcccf6733238631256806e081a9675"} Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.001652 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.031213 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:53:53 crc kubenswrapper[4796]: E1205 10:53:53.031632 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.033889 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"c5d11e0e-4240-4769-8a8a-945f78970a6c\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.033943 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brkcp\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-kube-api-access-brkcp\") pod \"c5d11e0e-4240-4769-8a8a-945f78970a6c\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.034070 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-repo-setup-combined-ca-bundle\") pod \"c5d11e0e-4240-4769-8a8a-945f78970a6c\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 
10:53:53.034118 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-nova-combined-ca-bundle\") pod \"c5d11e0e-4240-4769-8a8a-945f78970a6c\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.034143 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-inventory\") pod \"c5d11e0e-4240-4769-8a8a-945f78970a6c\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.034166 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-neutron-metadata-combined-ca-bundle\") pod \"c5d11e0e-4240-4769-8a8a-945f78970a6c\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.034204 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-telemetry-combined-ca-bundle\") pod \"c5d11e0e-4240-4769-8a8a-945f78970a6c\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.034233 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-bootstrap-combined-ca-bundle\") pod \"c5d11e0e-4240-4769-8a8a-945f78970a6c\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.034259 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-ssh-key\") pod \"c5d11e0e-4240-4769-8a8a-945f78970a6c\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.034280 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-libvirt-combined-ca-bundle\") pod \"c5d11e0e-4240-4769-8a8a-945f78970a6c\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.034307 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"c5d11e0e-4240-4769-8a8a-945f78970a6c\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.034343 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-ovn-combined-ca-bundle\") pod \"c5d11e0e-4240-4769-8a8a-945f78970a6c\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.034370 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"c5d11e0e-4240-4769-8a8a-945f78970a6c\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.034401 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-ovn-default-certs-0\") pod \"c5d11e0e-4240-4769-8a8a-945f78970a6c\" (UID: \"c5d11e0e-4240-4769-8a8a-945f78970a6c\") " Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.043423 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "c5d11e0e-4240-4769-8a8a-945f78970a6c" (UID: "c5d11e0e-4240-4769-8a8a-945f78970a6c"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.045117 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c5d11e0e-4240-4769-8a8a-945f78970a6c" (UID: "c5d11e0e-4240-4769-8a8a-945f78970a6c"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.045530 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "c5d11e0e-4240-4769-8a8a-945f78970a6c" (UID: "c5d11e0e-4240-4769-8a8a-945f78970a6c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.045834 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c5d11e0e-4240-4769-8a8a-945f78970a6c" (UID: "c5d11e0e-4240-4769-8a8a-945f78970a6c"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.047154 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "c5d11e0e-4240-4769-8a8a-945f78970a6c" (UID: "c5d11e0e-4240-4769-8a8a-945f78970a6c"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.048163 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "c5d11e0e-4240-4769-8a8a-945f78970a6c" (UID: "c5d11e0e-4240-4769-8a8a-945f78970a6c"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.050845 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "c5d11e0e-4240-4769-8a8a-945f78970a6c" (UID: "c5d11e0e-4240-4769-8a8a-945f78970a6c"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.051057 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "c5d11e0e-4240-4769-8a8a-945f78970a6c" (UID: "c5d11e0e-4240-4769-8a8a-945f78970a6c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.051239 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c5d11e0e-4240-4769-8a8a-945f78970a6c" (UID: "c5d11e0e-4240-4769-8a8a-945f78970a6c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.054765 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-kube-api-access-brkcp" (OuterVolumeSpecName: "kube-api-access-brkcp") pod "c5d11e0e-4240-4769-8a8a-945f78970a6c" (UID: "c5d11e0e-4240-4769-8a8a-945f78970a6c"). InnerVolumeSpecName "kube-api-access-brkcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.054811 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "c5d11e0e-4240-4769-8a8a-945f78970a6c" (UID: "c5d11e0e-4240-4769-8a8a-945f78970a6c"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.054869 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "c5d11e0e-4240-4769-8a8a-945f78970a6c" (UID: "c5d11e0e-4240-4769-8a8a-945f78970a6c"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.064087 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-inventory" (OuterVolumeSpecName: "inventory") pod "c5d11e0e-4240-4769-8a8a-945f78970a6c" (UID: "c5d11e0e-4240-4769-8a8a-945f78970a6c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.065196 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c5d11e0e-4240-4769-8a8a-945f78970a6c" (UID: "c5d11e0e-4240-4769-8a8a-945f78970a6c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.136154 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.136184 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brkcp\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-kube-api-access-brkcp\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.136199 4796 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.136215 4796 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.136227 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.136238 4796 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.136248 4796 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.136257 4796 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.136264 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.136272 4796 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.136281 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.136291 4796 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5d11e0e-4240-4769-8a8a-945f78970a6c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.136301 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.136311 4796 reconciler_common.go:293] "Volume 
detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c5d11e0e-4240-4769-8a8a-945f78970a6c-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.675787 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" event={"ID":"c5d11e0e-4240-4769-8a8a-945f78970a6c","Type":"ContainerDied","Data":"28cdda3725b3b0eaf3e17c2605aad8713c4c913415e55c35d2c7948b20c55dfb"} Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.676215 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28cdda3725b3b0eaf3e17c2605aad8713c4c913415e55c35d2c7948b20c55dfb" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.675995 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.789658 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88"] Dec 05 10:53:53 crc kubenswrapper[4796]: E1205 10:53:53.790220 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d11e0e-4240-4769-8a8a-945f78970a6c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.790244 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d11e0e-4240-4769-8a8a-945f78970a6c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.790536 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5d11e0e-4240-4769-8a8a-945f78970a6c" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.791292 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.793358 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.793483 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.793512 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vmswc" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.793659 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.793981 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.797575 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88"] Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.853819 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/313cdd2e-bea3-40ac-aee2-b0452b059735-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbr88\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.853947 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/313cdd2e-bea3-40ac-aee2-b0452b059735-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbr88\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.853983 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/313cdd2e-bea3-40ac-aee2-b0452b059735-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbr88\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.854026 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313cdd2e-bea3-40ac-aee2-b0452b059735-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbr88\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.854057 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qgh4\" (UniqueName: \"kubernetes.io/projected/313cdd2e-bea3-40ac-aee2-b0452b059735-kube-api-access-8qgh4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbr88\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.956031 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/313cdd2e-bea3-40ac-aee2-b0452b059735-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbr88\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.956135 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/313cdd2e-bea3-40ac-aee2-b0452b059735-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbr88\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.956171 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/313cdd2e-bea3-40ac-aee2-b0452b059735-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbr88\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.956215 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313cdd2e-bea3-40ac-aee2-b0452b059735-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbr88\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.956246 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qgh4\" (UniqueName: \"kubernetes.io/projected/313cdd2e-bea3-40ac-aee2-b0452b059735-kube-api-access-8qgh4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbr88\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.957751 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/313cdd2e-bea3-40ac-aee2-b0452b059735-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbr88\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" Dec 05 
10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.961109 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313cdd2e-bea3-40ac-aee2-b0452b059735-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbr88\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.961898 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/313cdd2e-bea3-40ac-aee2-b0452b059735-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbr88\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.962600 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/313cdd2e-bea3-40ac-aee2-b0452b059735-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbr88\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" Dec 05 10:53:53 crc kubenswrapper[4796]: I1205 10:53:53.970278 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qgh4\" (UniqueName: \"kubernetes.io/projected/313cdd2e-bea3-40ac-aee2-b0452b059735-kube-api-access-8qgh4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbr88\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" Dec 05 10:53:54 crc kubenswrapper[4796]: I1205 10:53:54.104232 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" Dec 05 10:53:54 crc kubenswrapper[4796]: I1205 10:53:54.581418 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88"] Dec 05 10:53:54 crc kubenswrapper[4796]: I1205 10:53:54.688661 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" event={"ID":"313cdd2e-bea3-40ac-aee2-b0452b059735","Type":"ContainerStarted","Data":"25683a642389325d9fecc6a5a4b85edffe9ca38417cc413aef0715671ccecead"} Dec 05 10:53:55 crc kubenswrapper[4796]: I1205 10:53:55.700637 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" event={"ID":"313cdd2e-bea3-40ac-aee2-b0452b059735","Type":"ContainerStarted","Data":"6c30a6840c3e9c9a12de58351d2bb9106c708fad9c844859cf21db1cfcd4b66e"} Dec 05 10:53:55 crc kubenswrapper[4796]: I1205 10:53:55.719543 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" podStartSLOduration=2.153987451 podStartE2EDuration="2.71952414s" podCreationTimestamp="2025-12-05 10:53:53 +0000 UTC" firstStartedPulling="2025-12-05 10:53:54.58849248 +0000 UTC m=+1580.876597994" lastFinishedPulling="2025-12-05 10:53:55.15402917 +0000 UTC m=+1581.442134683" observedRunningTime="2025-12-05 10:53:55.718230307 +0000 UTC m=+1582.006335820" watchObservedRunningTime="2025-12-05 10:53:55.71952414 +0000 UTC m=+1582.007629654" Dec 05 10:54:06 crc kubenswrapper[4796]: I1205 10:54:06.031321 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:54:06 crc kubenswrapper[4796]: E1205 10:54:06.032107 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:54:06 crc kubenswrapper[4796]: I1205 10:54:06.043617 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-lz7nl"] Dec 05 10:54:06 crc kubenswrapper[4796]: I1205 10:54:06.049425 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-lz7nl"] Dec 05 10:54:08 crc kubenswrapper[4796]: I1205 10:54:08.041254 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a07ea2d4-a394-444a-a8d8-14970378437e" path="/var/lib/kubelet/pods/a07ea2d4-a394-444a-a8d8-14970378437e/volumes" Dec 05 10:54:18 crc kubenswrapper[4796]: I1205 10:54:18.032120 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:54:18 crc kubenswrapper[4796]: E1205 10:54:18.033031 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:54:31 crc kubenswrapper[4796]: I1205 10:54:31.031557 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:54:31 crc kubenswrapper[4796]: E1205 10:54:31.032360 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:54:37 crc kubenswrapper[4796]: I1205 10:54:37.599505 4796 scope.go:117] "RemoveContainer" containerID="5d1b1fc3950f3d7b05037e5026958b35a39961d9db25f552ae7916169b027704" Dec 05 10:54:42 crc kubenswrapper[4796]: I1205 10:54:42.114030 4796 generic.go:334] "Generic (PLEG): container finished" podID="313cdd2e-bea3-40ac-aee2-b0452b059735" containerID="6c30a6840c3e9c9a12de58351d2bb9106c708fad9c844859cf21db1cfcd4b66e" exitCode=0 Dec 05 10:54:42 crc kubenswrapper[4796]: I1205 10:54:42.114111 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" event={"ID":"313cdd2e-bea3-40ac-aee2-b0452b059735","Type":"ContainerDied","Data":"6c30a6840c3e9c9a12de58351d2bb9106c708fad9c844859cf21db1cfcd4b66e"} Dec 05 10:54:43 crc kubenswrapper[4796]: I1205 10:54:43.488154 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" Dec 05 10:54:43 crc kubenswrapper[4796]: I1205 10:54:43.626012 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313cdd2e-bea3-40ac-aee2-b0452b059735-ovn-combined-ca-bundle\") pod \"313cdd2e-bea3-40ac-aee2-b0452b059735\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " Dec 05 10:54:43 crc kubenswrapper[4796]: I1205 10:54:43.626226 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qgh4\" (UniqueName: \"kubernetes.io/projected/313cdd2e-bea3-40ac-aee2-b0452b059735-kube-api-access-8qgh4\") pod \"313cdd2e-bea3-40ac-aee2-b0452b059735\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " Dec 05 10:54:43 crc kubenswrapper[4796]: I1205 10:54:43.626422 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/313cdd2e-bea3-40ac-aee2-b0452b059735-inventory\") pod \"313cdd2e-bea3-40ac-aee2-b0452b059735\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " Dec 05 10:54:43 crc kubenswrapper[4796]: I1205 10:54:43.626453 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/313cdd2e-bea3-40ac-aee2-b0452b059735-ovncontroller-config-0\") pod \"313cdd2e-bea3-40ac-aee2-b0452b059735\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " Dec 05 10:54:43 crc kubenswrapper[4796]: I1205 10:54:43.626482 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/313cdd2e-bea3-40ac-aee2-b0452b059735-ssh-key\") pod \"313cdd2e-bea3-40ac-aee2-b0452b059735\" (UID: \"313cdd2e-bea3-40ac-aee2-b0452b059735\") " Dec 05 10:54:43 crc kubenswrapper[4796]: I1205 10:54:43.633085 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/313cdd2e-bea3-40ac-aee2-b0452b059735-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "313cdd2e-bea3-40ac-aee2-b0452b059735" (UID: "313cdd2e-bea3-40ac-aee2-b0452b059735"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:54:43 crc kubenswrapper[4796]: I1205 10:54:43.634943 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/313cdd2e-bea3-40ac-aee2-b0452b059735-kube-api-access-8qgh4" (OuterVolumeSpecName: "kube-api-access-8qgh4") pod "313cdd2e-bea3-40ac-aee2-b0452b059735" (UID: "313cdd2e-bea3-40ac-aee2-b0452b059735"). InnerVolumeSpecName "kube-api-access-8qgh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:54:43 crc kubenswrapper[4796]: I1205 10:54:43.649982 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/313cdd2e-bea3-40ac-aee2-b0452b059735-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "313cdd2e-bea3-40ac-aee2-b0452b059735" (UID: "313cdd2e-bea3-40ac-aee2-b0452b059735"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 10:54:43 crc kubenswrapper[4796]: I1205 10:54:43.660094 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313cdd2e-bea3-40ac-aee2-b0452b059735-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "313cdd2e-bea3-40ac-aee2-b0452b059735" (UID: "313cdd2e-bea3-40ac-aee2-b0452b059735"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:54:43 crc kubenswrapper[4796]: I1205 10:54:43.660955 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313cdd2e-bea3-40ac-aee2-b0452b059735-inventory" (OuterVolumeSpecName: "inventory") pod "313cdd2e-bea3-40ac-aee2-b0452b059735" (UID: "313cdd2e-bea3-40ac-aee2-b0452b059735"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:54:43 crc kubenswrapper[4796]: I1205 10:54:43.728909 4796 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/313cdd2e-bea3-40ac-aee2-b0452b059735-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:54:43 crc kubenswrapper[4796]: I1205 10:54:43.728943 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qgh4\" (UniqueName: \"kubernetes.io/projected/313cdd2e-bea3-40ac-aee2-b0452b059735-kube-api-access-8qgh4\") on node \"crc\" DevicePath \"\"" Dec 05 10:54:43 crc kubenswrapper[4796]: I1205 10:54:43.728956 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/313cdd2e-bea3-40ac-aee2-b0452b059735-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 10:54:43 crc kubenswrapper[4796]: I1205 10:54:43.728966 4796 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/313cdd2e-bea3-40ac-aee2-b0452b059735-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 10:54:43 crc kubenswrapper[4796]: I1205 10:54:43.728973 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/313cdd2e-bea3-40ac-aee2-b0452b059735-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.133448 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" event={"ID":"313cdd2e-bea3-40ac-aee2-b0452b059735","Type":"ContainerDied","Data":"25683a642389325d9fecc6a5a4b85edffe9ca38417cc413aef0715671ccecead"} Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.133742 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25683a642389325d9fecc6a5a4b85edffe9ca38417cc413aef0715671ccecead" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.133508 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbr88" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.206425 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk"] Dec 05 10:54:44 crc kubenswrapper[4796]: E1205 10:54:44.206854 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313cdd2e-bea3-40ac-aee2-b0452b059735" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.206878 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="313cdd2e-bea3-40ac-aee2-b0452b059735" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.207089 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="313cdd2e-bea3-40ac-aee2-b0452b059735" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.207733 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.211154 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.211353 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.211389 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.211478 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vmswc" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.211650 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.211704 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.216387 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk"] Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.237808 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.237927 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.237957 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.238046 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.238077 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr59q\" (UniqueName: \"kubernetes.io/projected/692ab668-84d9-4673-8601-c09b4025b5fe-kube-api-access-qr59q\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.238097 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.339992 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.340092 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr59q\" (UniqueName: \"kubernetes.io/projected/692ab668-84d9-4673-8601-c09b4025b5fe-kube-api-access-qr59q\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.340633 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.340733 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.341567 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.341612 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.344906 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.344943 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk\" (UID: 
\"692ab668-84d9-4673-8601-c09b4025b5fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.345367 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.345979 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.346872 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 10:54:44.356459 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr59q\" (UniqueName: \"kubernetes.io/projected/692ab668-84d9-4673-8601-c09b4025b5fe-kube-api-access-qr59q\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:44 crc kubenswrapper[4796]: I1205 
10:54:44.523488 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:54:45 crc kubenswrapper[4796]: I1205 10:54:45.003149 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk"] Dec 05 10:54:45 crc kubenswrapper[4796]: I1205 10:54:45.006630 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 10:54:45 crc kubenswrapper[4796]: I1205 10:54:45.143609 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" event={"ID":"692ab668-84d9-4673-8601-c09b4025b5fe","Type":"ContainerStarted","Data":"60219d70b54b1ca780e5973a8e37d3488d08e903664d164c3cef2d0e57a7038c"} Dec 05 10:54:46 crc kubenswrapper[4796]: I1205 10:54:46.031621 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:54:46 crc kubenswrapper[4796]: E1205 10:54:46.032337 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:54:46 crc kubenswrapper[4796]: I1205 10:54:46.165478 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" event={"ID":"692ab668-84d9-4673-8601-c09b4025b5fe","Type":"ContainerStarted","Data":"3cceca840f6440e369d782e60e2aa2dfa8ab90a08fc45a59ee99f6b9d030f21c"} Dec 05 10:54:46 crc kubenswrapper[4796]: I1205 10:54:46.185877 4796 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" podStartSLOduration=1.692250351 podStartE2EDuration="2.185846026s" podCreationTimestamp="2025-12-05 10:54:44 +0000 UTC" firstStartedPulling="2025-12-05 10:54:45.00633852 +0000 UTC m=+1631.294444034" lastFinishedPulling="2025-12-05 10:54:45.499934195 +0000 UTC m=+1631.788039709" observedRunningTime="2025-12-05 10:54:46.182478152 +0000 UTC m=+1632.470583675" watchObservedRunningTime="2025-12-05 10:54:46.185846026 +0000 UTC m=+1632.473951539" Dec 05 10:54:59 crc kubenswrapper[4796]: I1205 10:54:59.030646 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:54:59 crc kubenswrapper[4796]: E1205 10:54:59.031509 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:55:14 crc kubenswrapper[4796]: I1205 10:55:14.037964 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:55:14 crc kubenswrapper[4796]: E1205 10:55:14.039088 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:55:21 crc kubenswrapper[4796]: I1205 10:55:21.490216 4796 generic.go:334] "Generic (PLEG): 
container finished" podID="692ab668-84d9-4673-8601-c09b4025b5fe" containerID="3cceca840f6440e369d782e60e2aa2dfa8ab90a08fc45a59ee99f6b9d030f21c" exitCode=0 Dec 05 10:55:21 crc kubenswrapper[4796]: I1205 10:55:21.490298 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" event={"ID":"692ab668-84d9-4673-8601-c09b4025b5fe","Type":"ContainerDied","Data":"3cceca840f6440e369d782e60e2aa2dfa8ab90a08fc45a59ee99f6b9d030f21c"} Dec 05 10:55:22 crc kubenswrapper[4796]: I1205 10:55:22.833224 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:55:22 crc kubenswrapper[4796]: I1205 10:55:22.976236 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-neutron-ovn-metadata-agent-neutron-config-0\") pod \"692ab668-84d9-4673-8601-c09b4025b5fe\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " Dec 05 10:55:22 crc kubenswrapper[4796]: I1205 10:55:22.976409 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-ssh-key\") pod \"692ab668-84d9-4673-8601-c09b4025b5fe\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " Dec 05 10:55:22 crc kubenswrapper[4796]: I1205 10:55:22.976490 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-neutron-metadata-combined-ca-bundle\") pod \"692ab668-84d9-4673-8601-c09b4025b5fe\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " Dec 05 10:55:22 crc kubenswrapper[4796]: I1205 10:55:22.976585 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-qr59q\" (UniqueName: \"kubernetes.io/projected/692ab668-84d9-4673-8601-c09b4025b5fe-kube-api-access-qr59q\") pod \"692ab668-84d9-4673-8601-c09b4025b5fe\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " Dec 05 10:55:22 crc kubenswrapper[4796]: I1205 10:55:22.976699 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-nova-metadata-neutron-config-0\") pod \"692ab668-84d9-4673-8601-c09b4025b5fe\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " Dec 05 10:55:22 crc kubenswrapper[4796]: I1205 10:55:22.976727 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-inventory\") pod \"692ab668-84d9-4673-8601-c09b4025b5fe\" (UID: \"692ab668-84d9-4673-8601-c09b4025b5fe\") " Dec 05 10:55:22 crc kubenswrapper[4796]: I1205 10:55:22.983423 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "692ab668-84d9-4673-8601-c09b4025b5fe" (UID: "692ab668-84d9-4673-8601-c09b4025b5fe"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:55:22 crc kubenswrapper[4796]: I1205 10:55:22.983645 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/692ab668-84d9-4673-8601-c09b4025b5fe-kube-api-access-qr59q" (OuterVolumeSpecName: "kube-api-access-qr59q") pod "692ab668-84d9-4673-8601-c09b4025b5fe" (UID: "692ab668-84d9-4673-8601-c09b4025b5fe"). InnerVolumeSpecName "kube-api-access-qr59q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.004945 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "692ab668-84d9-4673-8601-c09b4025b5fe" (UID: "692ab668-84d9-4673-8601-c09b4025b5fe"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.005736 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "692ab668-84d9-4673-8601-c09b4025b5fe" (UID: "692ab668-84d9-4673-8601-c09b4025b5fe"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.006726 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "692ab668-84d9-4673-8601-c09b4025b5fe" (UID: "692ab668-84d9-4673-8601-c09b4025b5fe"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.008753 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-inventory" (OuterVolumeSpecName: "inventory") pod "692ab668-84d9-4673-8601-c09b4025b5fe" (UID: "692ab668-84d9-4673-8601-c09b4025b5fe"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.079874 4796 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.079903 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.079918 4796 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.079931 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.079940 4796 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/692ab668-84d9-4673-8601-c09b4025b5fe-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.079951 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr59q\" (UniqueName: \"kubernetes.io/projected/692ab668-84d9-4673-8601-c09b4025b5fe-kube-api-access-qr59q\") on node \"crc\" DevicePath \"\"" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.551646 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" 
event={"ID":"692ab668-84d9-4673-8601-c09b4025b5fe","Type":"ContainerDied","Data":"60219d70b54b1ca780e5973a8e37d3488d08e903664d164c3cef2d0e57a7038c"} Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.551717 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60219d70b54b1ca780e5973a8e37d3488d08e903664d164c3cef2d0e57a7038c" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.551799 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.607122 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8"] Dec 05 10:55:23 crc kubenswrapper[4796]: E1205 10:55:23.607562 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692ab668-84d9-4673-8601-c09b4025b5fe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.607581 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="692ab668-84d9-4673-8601-c09b4025b5fe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.607789 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="692ab668-84d9-4673-8601-c09b4025b5fe" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.608469 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.611408 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.614229 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8"] Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.614344 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.614770 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.614779 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.614941 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vmswc" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.796742 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.796810 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.796896 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.796942 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t88c\" (UniqueName: \"kubernetes.io/projected/0aba1742-7328-48a0-b9f5-af4c66636de3-kube-api-access-7t88c\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.796969 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.899993 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.900072 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7t88c\" (UniqueName: \"kubernetes.io/projected/0aba1742-7328-48a0-b9f5-af4c66636de3-kube-api-access-7t88c\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.900115 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.900253 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.900297 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.906233 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.906238 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.906768 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.910795 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.916490 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t88c\" (UniqueName: \"kubernetes.io/projected/0aba1742-7328-48a0-b9f5-af4c66636de3-kube-api-access-7t88c\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" Dec 05 10:55:23 crc kubenswrapper[4796]: I1205 10:55:23.925750 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" Dec 05 10:55:24 crc kubenswrapper[4796]: I1205 10:55:24.382670 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8"] Dec 05 10:55:24 crc kubenswrapper[4796]: I1205 10:55:24.563492 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" event={"ID":"0aba1742-7328-48a0-b9f5-af4c66636de3","Type":"ContainerStarted","Data":"32031514d9a4bed75ba7af3f24161de9275b4d2833eda669091e48dfa7e08ec5"} Dec 05 10:55:25 crc kubenswrapper[4796]: I1205 10:55:25.574695 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" event={"ID":"0aba1742-7328-48a0-b9f5-af4c66636de3","Type":"ContainerStarted","Data":"e4e253159167560f696d9a7976fe7a0e6f95ec0b41176728d1ecc10873b3ec8b"} Dec 05 10:55:25 crc kubenswrapper[4796]: I1205 10:55:25.600282 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" podStartSLOduration=2.117545625 podStartE2EDuration="2.600262119s" podCreationTimestamp="2025-12-05 10:55:23 +0000 UTC" firstStartedPulling="2025-12-05 10:55:24.390284036 +0000 UTC m=+1670.678389549" lastFinishedPulling="2025-12-05 10:55:24.87300053 +0000 UTC m=+1671.161106043" observedRunningTime="2025-12-05 10:55:25.591744841 +0000 UTC m=+1671.879850353" watchObservedRunningTime="2025-12-05 10:55:25.600262119 +0000 UTC m=+1671.888367633" Dec 05 10:55:29 crc kubenswrapper[4796]: I1205 10:55:29.031137 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:55:29 crc kubenswrapper[4796]: E1205 10:55:29.031817 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:55:42 crc kubenswrapper[4796]: I1205 10:55:42.031219 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:55:42 crc kubenswrapper[4796]: E1205 10:55:42.031922 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:55:56 crc kubenswrapper[4796]: I1205 10:55:56.031876 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:55:56 crc kubenswrapper[4796]: E1205 10:55:56.032551 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:56:09 crc kubenswrapper[4796]: I1205 10:56:09.031014 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:56:09 crc kubenswrapper[4796]: E1205 10:56:09.031816 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:56:20 crc kubenswrapper[4796]: I1205 10:56:20.030963 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:56:20 crc kubenswrapper[4796]: E1205 10:56:20.031936 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:56:33 crc kubenswrapper[4796]: I1205 10:56:33.031626 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:56:33 crc kubenswrapper[4796]: E1205 10:56:33.033841 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:56:48 crc kubenswrapper[4796]: I1205 10:56:48.031935 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:56:48 crc kubenswrapper[4796]: E1205 10:56:48.032998 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:57:03 crc kubenswrapper[4796]: I1205 10:57:03.031800 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:57:03 crc kubenswrapper[4796]: E1205 10:57:03.032806 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:57:16 crc kubenswrapper[4796]: I1205 10:57:16.032380 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:57:16 crc kubenswrapper[4796]: E1205 10:57:16.033459 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:57:29 crc kubenswrapper[4796]: I1205 10:57:29.031282 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:57:29 crc kubenswrapper[4796]: E1205 10:57:29.034002 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 10:57:42 crc kubenswrapper[4796]: I1205 10:57:42.031768 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 10:57:42 crc kubenswrapper[4796]: I1205 10:57:42.756713 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerStarted","Data":"dcb01733a1303955cc7132d26ec26ddb1901af7c36f36800bac1df8150042858"} Dec 05 10:58:39 crc kubenswrapper[4796]: I1205 10:58:39.294192 4796 generic.go:334] "Generic (PLEG): container finished" podID="0aba1742-7328-48a0-b9f5-af4c66636de3" containerID="e4e253159167560f696d9a7976fe7a0e6f95ec0b41176728d1ecc10873b3ec8b" exitCode=0 Dec 05 10:58:39 crc kubenswrapper[4796]: I1205 10:58:39.294304 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" event={"ID":"0aba1742-7328-48a0-b9f5-af4c66636de3","Type":"ContainerDied","Data":"e4e253159167560f696d9a7976fe7a0e6f95ec0b41176728d1ecc10873b3ec8b"} Dec 05 10:58:40 crc kubenswrapper[4796]: I1205 10:58:40.667098 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" Dec 05 10:58:40 crc kubenswrapper[4796]: I1205 10:58:40.806518 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-ssh-key\") pod \"0aba1742-7328-48a0-b9f5-af4c66636de3\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " Dec 05 10:58:40 crc kubenswrapper[4796]: I1205 10:58:40.806861 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-inventory\") pod \"0aba1742-7328-48a0-b9f5-af4c66636de3\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " Dec 05 10:58:40 crc kubenswrapper[4796]: I1205 10:58:40.806911 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-libvirt-combined-ca-bundle\") pod \"0aba1742-7328-48a0-b9f5-af4c66636de3\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " Dec 05 10:58:40 crc kubenswrapper[4796]: I1205 10:58:40.806960 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-libvirt-secret-0\") pod \"0aba1742-7328-48a0-b9f5-af4c66636de3\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " Dec 05 10:58:40 crc kubenswrapper[4796]: I1205 10:58:40.807000 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t88c\" (UniqueName: \"kubernetes.io/projected/0aba1742-7328-48a0-b9f5-af4c66636de3-kube-api-access-7t88c\") pod \"0aba1742-7328-48a0-b9f5-af4c66636de3\" (UID: \"0aba1742-7328-48a0-b9f5-af4c66636de3\") " Dec 05 10:58:40 crc kubenswrapper[4796]: I1205 10:58:40.814130 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/0aba1742-7328-48a0-b9f5-af4c66636de3-kube-api-access-7t88c" (OuterVolumeSpecName: "kube-api-access-7t88c") pod "0aba1742-7328-48a0-b9f5-af4c66636de3" (UID: "0aba1742-7328-48a0-b9f5-af4c66636de3"). InnerVolumeSpecName "kube-api-access-7t88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 10:58:40 crc kubenswrapper[4796]: I1205 10:58:40.814243 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "0aba1742-7328-48a0-b9f5-af4c66636de3" (UID: "0aba1742-7328-48a0-b9f5-af4c66636de3"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:58:40 crc kubenswrapper[4796]: I1205 10:58:40.832148 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0aba1742-7328-48a0-b9f5-af4c66636de3" (UID: "0aba1742-7328-48a0-b9f5-af4c66636de3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:58:40 crc kubenswrapper[4796]: I1205 10:58:40.837571 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-inventory" (OuterVolumeSpecName: "inventory") pod "0aba1742-7328-48a0-b9f5-af4c66636de3" (UID: "0aba1742-7328-48a0-b9f5-af4c66636de3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:58:40 crc kubenswrapper[4796]: I1205 10:58:40.838316 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "0aba1742-7328-48a0-b9f5-af4c66636de3" (UID: "0aba1742-7328-48a0-b9f5-af4c66636de3"). 
InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 10:58:40 crc kubenswrapper[4796]: I1205 10:58:40.910049 4796 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 05 10:58:40 crc kubenswrapper[4796]: I1205 10:58:40.910078 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t88c\" (UniqueName: \"kubernetes.io/projected/0aba1742-7328-48a0-b9f5-af4c66636de3-kube-api-access-7t88c\") on node \"crc\" DevicePath \"\"" Dec 05 10:58:40 crc kubenswrapper[4796]: I1205 10:58:40.910092 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 10:58:40 crc kubenswrapper[4796]: I1205 10:58:40.910104 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 10:58:40 crc kubenswrapper[4796]: I1205 10:58:40.910116 4796 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0aba1742-7328-48a0-b9f5-af4c66636de3-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.320672 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" event={"ID":"0aba1742-7328-48a0-b9f5-af4c66636de3","Type":"ContainerDied","Data":"32031514d9a4bed75ba7af3f24161de9275b4d2833eda669091e48dfa7e08ec5"} Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.320756 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32031514d9a4bed75ba7af3f24161de9275b4d2833eda669091e48dfa7e08ec5" Dec 05 
10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.320772 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.400649 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx"] Dec 05 10:58:41 crc kubenswrapper[4796]: E1205 10:58:41.401392 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aba1742-7328-48a0-b9f5-af4c66636de3" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.401469 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aba1742-7328-48a0-b9f5-af4c66636de3" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.401791 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aba1742-7328-48a0-b9f5-af4c66636de3" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.402656 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.404282 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vmswc" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.405425 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.405600 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.406038 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.406174 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.406404 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.406524 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.407169 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx"] Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.422620 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 
10:58:41.422675 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.422709 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqw5d\" (UniqueName: \"kubernetes.io/projected/43ae5283-caa9-4308-b825-3c937081341c-kube-api-access-mqw5d\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.422768 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/43ae5283-caa9-4308-b825-3c937081341c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.422808 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.422844 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.422864 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.422915 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.422947 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.524220 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/43ae5283-caa9-4308-b825-3c937081341c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.524276 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.524312 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.524331 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.524389 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.524410 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.524433 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.524451 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.524468 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqw5d\" (UniqueName: \"kubernetes.io/projected/43ae5283-caa9-4308-b825-3c937081341c-kube-api-access-mqw5d\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.525437 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/43ae5283-caa9-4308-b825-3c937081341c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 
10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.528010 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.528403 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.528580 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.528978 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.529020 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-ssh-key\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.529345 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.529415 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.539310 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqw5d\" (UniqueName: \"kubernetes.io/projected/43ae5283-caa9-4308-b825-3c937081341c-kube-api-access-mqw5d\") pod \"nova-edpm-deployment-openstack-edpm-ipam-47tqx\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:41 crc kubenswrapper[4796]: I1205 10:58:41.718067 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 10:58:42 crc kubenswrapper[4796]: I1205 10:58:42.205878 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx"] Dec 05 10:58:42 crc kubenswrapper[4796]: I1205 10:58:42.331178 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" event={"ID":"43ae5283-caa9-4308-b825-3c937081341c","Type":"ContainerStarted","Data":"53bb11da50b12c4bda6c0f5ffa939a1706f9b3de6aad6286f39867e85eac3ff9"} Dec 05 10:58:43 crc kubenswrapper[4796]: I1205 10:58:43.341229 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" event={"ID":"43ae5283-caa9-4308-b825-3c937081341c","Type":"ContainerStarted","Data":"4b1a7f8ecf4831972763c354b5ab669628dc94a4dd749f2dbddeb5d5a46c6e8c"} Dec 05 10:58:43 crc kubenswrapper[4796]: I1205 10:58:43.369194 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" podStartSLOduration=1.676414648 podStartE2EDuration="2.369174107s" podCreationTimestamp="2025-12-05 10:58:41 +0000 UTC" firstStartedPulling="2025-12-05 10:58:42.221532403 +0000 UTC m=+1868.509637916" lastFinishedPulling="2025-12-05 10:58:42.914291872 +0000 UTC m=+1869.202397375" observedRunningTime="2025-12-05 10:58:43.358773135 +0000 UTC m=+1869.646878648" watchObservedRunningTime="2025-12-05 10:58:43.369174107 +0000 UTC m=+1869.657279620" Dec 05 10:59:55 crc kubenswrapper[4796]: I1205 10:59:55.091509 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dqs98"] Dec 05 10:59:55 crc kubenswrapper[4796]: I1205 10:59:55.094114 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dqs98" Dec 05 10:59:55 crc kubenswrapper[4796]: I1205 10:59:55.100146 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dqs98"] Dec 05 10:59:55 crc kubenswrapper[4796]: I1205 10:59:55.239888 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l85v2\" (UniqueName: \"kubernetes.io/projected/00680f91-31c0-4231-9b3d-8a1fda4608ea-kube-api-access-l85v2\") pod \"certified-operators-dqs98\" (UID: \"00680f91-31c0-4231-9b3d-8a1fda4608ea\") " pod="openshift-marketplace/certified-operators-dqs98" Dec 05 10:59:55 crc kubenswrapper[4796]: I1205 10:59:55.240384 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00680f91-31c0-4231-9b3d-8a1fda4608ea-utilities\") pod \"certified-operators-dqs98\" (UID: \"00680f91-31c0-4231-9b3d-8a1fda4608ea\") " pod="openshift-marketplace/certified-operators-dqs98" Dec 05 10:59:55 crc kubenswrapper[4796]: I1205 10:59:55.240543 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00680f91-31c0-4231-9b3d-8a1fda4608ea-catalog-content\") pod \"certified-operators-dqs98\" (UID: \"00680f91-31c0-4231-9b3d-8a1fda4608ea\") " pod="openshift-marketplace/certified-operators-dqs98" Dec 05 10:59:55 crc kubenswrapper[4796]: I1205 10:59:55.345343 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00680f91-31c0-4231-9b3d-8a1fda4608ea-catalog-content\") pod \"certified-operators-dqs98\" (UID: \"00680f91-31c0-4231-9b3d-8a1fda4608ea\") " pod="openshift-marketplace/certified-operators-dqs98" Dec 05 10:59:55 crc kubenswrapper[4796]: I1205 10:59:55.345480 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l85v2\" (UniqueName: \"kubernetes.io/projected/00680f91-31c0-4231-9b3d-8a1fda4608ea-kube-api-access-l85v2\") pod \"certified-operators-dqs98\" (UID: \"00680f91-31c0-4231-9b3d-8a1fda4608ea\") " pod="openshift-marketplace/certified-operators-dqs98" Dec 05 10:59:55 crc kubenswrapper[4796]: I1205 10:59:55.345501 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00680f91-31c0-4231-9b3d-8a1fda4608ea-utilities\") pod \"certified-operators-dqs98\" (UID: \"00680f91-31c0-4231-9b3d-8a1fda4608ea\") " pod="openshift-marketplace/certified-operators-dqs98" Dec 05 10:59:55 crc kubenswrapper[4796]: I1205 10:59:55.346068 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00680f91-31c0-4231-9b3d-8a1fda4608ea-catalog-content\") pod \"certified-operators-dqs98\" (UID: \"00680f91-31c0-4231-9b3d-8a1fda4608ea\") " pod="openshift-marketplace/certified-operators-dqs98" Dec 05 10:59:55 crc kubenswrapper[4796]: I1205 10:59:55.346419 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00680f91-31c0-4231-9b3d-8a1fda4608ea-utilities\") pod \"certified-operators-dqs98\" (UID: \"00680f91-31c0-4231-9b3d-8a1fda4608ea\") " pod="openshift-marketplace/certified-operators-dqs98" Dec 05 10:59:55 crc kubenswrapper[4796]: I1205 10:59:55.370445 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l85v2\" (UniqueName: \"kubernetes.io/projected/00680f91-31c0-4231-9b3d-8a1fda4608ea-kube-api-access-l85v2\") pod \"certified-operators-dqs98\" (UID: \"00680f91-31c0-4231-9b3d-8a1fda4608ea\") " pod="openshift-marketplace/certified-operators-dqs98" Dec 05 10:59:55 crc kubenswrapper[4796]: I1205 10:59:55.410611 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dqs98" Dec 05 10:59:55 crc kubenswrapper[4796]: I1205 10:59:55.846886 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dqs98"] Dec 05 10:59:56 crc kubenswrapper[4796]: I1205 10:59:56.004137 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqs98" event={"ID":"00680f91-31c0-4231-9b3d-8a1fda4608ea","Type":"ContainerStarted","Data":"6f80fa1a3f6fbdb81a04f8157ff44d55c74594258469b4e66f07da598f8ad51e"} Dec 05 10:59:56 crc kubenswrapper[4796]: I1205 10:59:56.004483 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqs98" event={"ID":"00680f91-31c0-4231-9b3d-8a1fda4608ea","Type":"ContainerStarted","Data":"8e5d0a6034c6d794345571efc618d79930aa736ca1906bd63653efecf398e538"} Dec 05 10:59:57 crc kubenswrapper[4796]: I1205 10:59:57.015479 4796 generic.go:334] "Generic (PLEG): container finished" podID="00680f91-31c0-4231-9b3d-8a1fda4608ea" containerID="6f80fa1a3f6fbdb81a04f8157ff44d55c74594258469b4e66f07da598f8ad51e" exitCode=0 Dec 05 10:59:57 crc kubenswrapper[4796]: I1205 10:59:57.015540 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqs98" event={"ID":"00680f91-31c0-4231-9b3d-8a1fda4608ea","Type":"ContainerDied","Data":"6f80fa1a3f6fbdb81a04f8157ff44d55c74594258469b4e66f07da598f8ad51e"} Dec 05 10:59:57 crc kubenswrapper[4796]: I1205 10:59:57.018356 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 10:59:58 crc kubenswrapper[4796]: I1205 10:59:58.026660 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqs98" event={"ID":"00680f91-31c0-4231-9b3d-8a1fda4608ea","Type":"ContainerStarted","Data":"9ba333fef1bdeff7fdacc2de07f338feb552bcd93f76828f2d0b3003d98b54d5"} Dec 05 10:59:59 crc 
kubenswrapper[4796]: I1205 10:59:59.036626 4796 generic.go:334] "Generic (PLEG): container finished" podID="00680f91-31c0-4231-9b3d-8a1fda4608ea" containerID="9ba333fef1bdeff7fdacc2de07f338feb552bcd93f76828f2d0b3003d98b54d5" exitCode=0 Dec 05 10:59:59 crc kubenswrapper[4796]: I1205 10:59:59.037019 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqs98" event={"ID":"00680f91-31c0-4231-9b3d-8a1fda4608ea","Type":"ContainerDied","Data":"9ba333fef1bdeff7fdacc2de07f338feb552bcd93f76828f2d0b3003d98b54d5"} Dec 05 11:00:00 crc kubenswrapper[4796]: I1205 11:00:00.048621 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqs98" event={"ID":"00680f91-31c0-4231-9b3d-8a1fda4608ea","Type":"ContainerStarted","Data":"6840ce40040728ccff4183893568519acbd32eb97cfcd81e23685ca2c166efba"} Dec 05 11:00:00 crc kubenswrapper[4796]: I1205 11:00:00.066610 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dqs98" podStartSLOduration=2.564009347 podStartE2EDuration="5.066588417s" podCreationTimestamp="2025-12-05 10:59:55 +0000 UTC" firstStartedPulling="2025-12-05 10:59:57.018064956 +0000 UTC m=+1943.306170469" lastFinishedPulling="2025-12-05 10:59:59.520644026 +0000 UTC m=+1945.808749539" observedRunningTime="2025-12-05 11:00:00.064066808 +0000 UTC m=+1946.352172321" watchObservedRunningTime="2025-12-05 11:00:00.066588417 +0000 UTC m=+1946.354693930" Dec 05 11:00:00 crc kubenswrapper[4796]: I1205 11:00:00.137014 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk"] Dec 05 11:00:00 crc kubenswrapper[4796]: I1205 11:00:00.138133 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk" Dec 05 11:00:00 crc kubenswrapper[4796]: I1205 11:00:00.139869 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 11:00:00 crc kubenswrapper[4796]: I1205 11:00:00.140649 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 11:00:00 crc kubenswrapper[4796]: I1205 11:00:00.146933 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk"] Dec 05 11:00:00 crc kubenswrapper[4796]: I1205 11:00:00.155503 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58sc4\" (UniqueName: \"kubernetes.io/projected/77f8a181-b8ee-4404-9065-17bc3b52ff66-kube-api-access-58sc4\") pod \"collect-profiles-29415540-4wrvk\" (UID: \"77f8a181-b8ee-4404-9065-17bc3b52ff66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk" Dec 05 11:00:00 crc kubenswrapper[4796]: I1205 11:00:00.155568 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77f8a181-b8ee-4404-9065-17bc3b52ff66-config-volume\") pod \"collect-profiles-29415540-4wrvk\" (UID: \"77f8a181-b8ee-4404-9065-17bc3b52ff66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk" Dec 05 11:00:00 crc kubenswrapper[4796]: I1205 11:00:00.155610 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77f8a181-b8ee-4404-9065-17bc3b52ff66-secret-volume\") pod \"collect-profiles-29415540-4wrvk\" (UID: \"77f8a181-b8ee-4404-9065-17bc3b52ff66\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk" Dec 05 11:00:00 crc kubenswrapper[4796]: I1205 11:00:00.257264 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77f8a181-b8ee-4404-9065-17bc3b52ff66-config-volume\") pod \"collect-profiles-29415540-4wrvk\" (UID: \"77f8a181-b8ee-4404-9065-17bc3b52ff66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk" Dec 05 11:00:00 crc kubenswrapper[4796]: I1205 11:00:00.257383 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77f8a181-b8ee-4404-9065-17bc3b52ff66-secret-volume\") pod \"collect-profiles-29415540-4wrvk\" (UID: \"77f8a181-b8ee-4404-9065-17bc3b52ff66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk" Dec 05 11:00:00 crc kubenswrapper[4796]: I1205 11:00:00.257623 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58sc4\" (UniqueName: \"kubernetes.io/projected/77f8a181-b8ee-4404-9065-17bc3b52ff66-kube-api-access-58sc4\") pod \"collect-profiles-29415540-4wrvk\" (UID: \"77f8a181-b8ee-4404-9065-17bc3b52ff66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk" Dec 05 11:00:00 crc kubenswrapper[4796]: I1205 11:00:00.258748 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77f8a181-b8ee-4404-9065-17bc3b52ff66-config-volume\") pod \"collect-profiles-29415540-4wrvk\" (UID: \"77f8a181-b8ee-4404-9065-17bc3b52ff66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk" Dec 05 11:00:00 crc kubenswrapper[4796]: I1205 11:00:00.266134 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/77f8a181-b8ee-4404-9065-17bc3b52ff66-secret-volume\") pod \"collect-profiles-29415540-4wrvk\" (UID: \"77f8a181-b8ee-4404-9065-17bc3b52ff66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk" Dec 05 11:00:00 crc kubenswrapper[4796]: I1205 11:00:00.276840 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58sc4\" (UniqueName: \"kubernetes.io/projected/77f8a181-b8ee-4404-9065-17bc3b52ff66-kube-api-access-58sc4\") pod \"collect-profiles-29415540-4wrvk\" (UID: \"77f8a181-b8ee-4404-9065-17bc3b52ff66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk" Dec 05 11:00:00 crc kubenswrapper[4796]: I1205 11:00:00.454210 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk" Dec 05 11:00:00 crc kubenswrapper[4796]: I1205 11:00:00.846986 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk"] Dec 05 11:00:01 crc kubenswrapper[4796]: I1205 11:00:01.059242 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk" event={"ID":"77f8a181-b8ee-4404-9065-17bc3b52ff66","Type":"ContainerStarted","Data":"da56c6d2feb00da915ebb4e1da8635d50e50e4d25451a81a466deeb7d178a41a"} Dec 05 11:00:01 crc kubenswrapper[4796]: I1205 11:00:01.059307 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk" event={"ID":"77f8a181-b8ee-4404-9065-17bc3b52ff66","Type":"ContainerStarted","Data":"eb3d82192b0d6904f7e4ee90f5f2363b18e23e5f632653e4f61b9f508fa50c9b"} Dec 05 11:00:01 crc kubenswrapper[4796]: I1205 11:00:01.082185 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk" 
podStartSLOduration=1.082167161 podStartE2EDuration="1.082167161s" podCreationTimestamp="2025-12-05 11:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 11:00:01.072237997 +0000 UTC m=+1947.360343530" watchObservedRunningTime="2025-12-05 11:00:01.082167161 +0000 UTC m=+1947.370272675" Dec 05 11:00:02 crc kubenswrapper[4796]: I1205 11:00:02.066406 4796 generic.go:334] "Generic (PLEG): container finished" podID="77f8a181-b8ee-4404-9065-17bc3b52ff66" containerID="da56c6d2feb00da915ebb4e1da8635d50e50e4d25451a81a466deeb7d178a41a" exitCode=0 Dec 05 11:00:02 crc kubenswrapper[4796]: I1205 11:00:02.066518 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk" event={"ID":"77f8a181-b8ee-4404-9065-17bc3b52ff66","Type":"ContainerDied","Data":"da56c6d2feb00da915ebb4e1da8635d50e50e4d25451a81a466deeb7d178a41a"} Dec 05 11:00:03 crc kubenswrapper[4796]: I1205 11:00:03.364745 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk" Dec 05 11:00:03 crc kubenswrapper[4796]: I1205 11:00:03.520220 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58sc4\" (UniqueName: \"kubernetes.io/projected/77f8a181-b8ee-4404-9065-17bc3b52ff66-kube-api-access-58sc4\") pod \"77f8a181-b8ee-4404-9065-17bc3b52ff66\" (UID: \"77f8a181-b8ee-4404-9065-17bc3b52ff66\") " Dec 05 11:00:03 crc kubenswrapper[4796]: I1205 11:00:03.520459 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77f8a181-b8ee-4404-9065-17bc3b52ff66-config-volume\") pod \"77f8a181-b8ee-4404-9065-17bc3b52ff66\" (UID: \"77f8a181-b8ee-4404-9065-17bc3b52ff66\") " Dec 05 11:00:03 crc kubenswrapper[4796]: I1205 11:00:03.520665 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77f8a181-b8ee-4404-9065-17bc3b52ff66-secret-volume\") pod \"77f8a181-b8ee-4404-9065-17bc3b52ff66\" (UID: \"77f8a181-b8ee-4404-9065-17bc3b52ff66\") " Dec 05 11:00:03 crc kubenswrapper[4796]: I1205 11:00:03.521231 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77f8a181-b8ee-4404-9065-17bc3b52ff66-config-volume" (OuterVolumeSpecName: "config-volume") pod "77f8a181-b8ee-4404-9065-17bc3b52ff66" (UID: "77f8a181-b8ee-4404-9065-17bc3b52ff66"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 11:00:03 crc kubenswrapper[4796]: I1205 11:00:03.525945 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77f8a181-b8ee-4404-9065-17bc3b52ff66-kube-api-access-58sc4" (OuterVolumeSpecName: "kube-api-access-58sc4") pod "77f8a181-b8ee-4404-9065-17bc3b52ff66" (UID: "77f8a181-b8ee-4404-9065-17bc3b52ff66"). 
InnerVolumeSpecName "kube-api-access-58sc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:00:03 crc kubenswrapper[4796]: I1205 11:00:03.526037 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77f8a181-b8ee-4404-9065-17bc3b52ff66-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "77f8a181-b8ee-4404-9065-17bc3b52ff66" (UID: "77f8a181-b8ee-4404-9065-17bc3b52ff66"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:00:03 crc kubenswrapper[4796]: I1205 11:00:03.622386 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77f8a181-b8ee-4404-9065-17bc3b52ff66-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 11:00:03 crc kubenswrapper[4796]: I1205 11:00:03.622416 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77f8a181-b8ee-4404-9065-17bc3b52ff66-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 11:00:03 crc kubenswrapper[4796]: I1205 11:00:03.622427 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58sc4\" (UniqueName: \"kubernetes.io/projected/77f8a181-b8ee-4404-9065-17bc3b52ff66-kube-api-access-58sc4\") on node \"crc\" DevicePath \"\"" Dec 05 11:00:04 crc kubenswrapper[4796]: I1205 11:00:04.084576 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk" event={"ID":"77f8a181-b8ee-4404-9065-17bc3b52ff66","Type":"ContainerDied","Data":"eb3d82192b0d6904f7e4ee90f5f2363b18e23e5f632653e4f61b9f508fa50c9b"} Dec 05 11:00:04 crc kubenswrapper[4796]: I1205 11:00:04.084624 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb3d82192b0d6904f7e4ee90f5f2363b18e23e5f632653e4f61b9f508fa50c9b" Dec 05 11:00:04 crc kubenswrapper[4796]: I1205 11:00:04.084659 4796 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415540-4wrvk" Dec 05 11:00:04 crc kubenswrapper[4796]: I1205 11:00:04.420019 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl"] Dec 05 11:00:04 crc kubenswrapper[4796]: I1205 11:00:04.426010 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415495-lj8xl"] Dec 05 11:00:05 crc kubenswrapper[4796]: I1205 11:00:05.177728 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 11:00:05 crc kubenswrapper[4796]: I1205 11:00:05.178182 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 11:00:05 crc kubenswrapper[4796]: I1205 11:00:05.411581 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dqs98" Dec 05 11:00:05 crc kubenswrapper[4796]: I1205 11:00:05.411935 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dqs98" Dec 05 11:00:05 crc kubenswrapper[4796]: I1205 11:00:05.449093 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dqs98" Dec 05 11:00:06 crc kubenswrapper[4796]: I1205 11:00:06.040210 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f9ebd86c-0c38-4954-8c59-a4e0168fb2d5" path="/var/lib/kubelet/pods/f9ebd86c-0c38-4954-8c59-a4e0168fb2d5/volumes" Dec 05 11:00:06 crc kubenswrapper[4796]: I1205 11:00:06.138372 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dqs98" Dec 05 11:00:06 crc kubenswrapper[4796]: I1205 11:00:06.187264 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dqs98"] Dec 05 11:00:08 crc kubenswrapper[4796]: I1205 11:00:08.115334 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dqs98" podUID="00680f91-31c0-4231-9b3d-8a1fda4608ea" containerName="registry-server" containerID="cri-o://6840ce40040728ccff4183893568519acbd32eb97cfcd81e23685ca2c166efba" gracePeriod=2 Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.001473 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dqs98" Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.032591 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00680f91-31c0-4231-9b3d-8a1fda4608ea-utilities\") pod \"00680f91-31c0-4231-9b3d-8a1fda4608ea\" (UID: \"00680f91-31c0-4231-9b3d-8a1fda4608ea\") " Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.032667 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00680f91-31c0-4231-9b3d-8a1fda4608ea-catalog-content\") pod \"00680f91-31c0-4231-9b3d-8a1fda4608ea\" (UID: \"00680f91-31c0-4231-9b3d-8a1fda4608ea\") " Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.032723 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l85v2\" (UniqueName: 
\"kubernetes.io/projected/00680f91-31c0-4231-9b3d-8a1fda4608ea-kube-api-access-l85v2\") pod \"00680f91-31c0-4231-9b3d-8a1fda4608ea\" (UID: \"00680f91-31c0-4231-9b3d-8a1fda4608ea\") " Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.033577 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00680f91-31c0-4231-9b3d-8a1fda4608ea-utilities" (OuterVolumeSpecName: "utilities") pod "00680f91-31c0-4231-9b3d-8a1fda4608ea" (UID: "00680f91-31c0-4231-9b3d-8a1fda4608ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.039112 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00680f91-31c0-4231-9b3d-8a1fda4608ea-kube-api-access-l85v2" (OuterVolumeSpecName: "kube-api-access-l85v2") pod "00680f91-31c0-4231-9b3d-8a1fda4608ea" (UID: "00680f91-31c0-4231-9b3d-8a1fda4608ea"). InnerVolumeSpecName "kube-api-access-l85v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.070881 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00680f91-31c0-4231-9b3d-8a1fda4608ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00680f91-31c0-4231-9b3d-8a1fda4608ea" (UID: "00680f91-31c0-4231-9b3d-8a1fda4608ea"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.126051 4796 generic.go:334] "Generic (PLEG): container finished" podID="00680f91-31c0-4231-9b3d-8a1fda4608ea" containerID="6840ce40040728ccff4183893568519acbd32eb97cfcd81e23685ca2c166efba" exitCode=0 Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.126113 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqs98" event={"ID":"00680f91-31c0-4231-9b3d-8a1fda4608ea","Type":"ContainerDied","Data":"6840ce40040728ccff4183893568519acbd32eb97cfcd81e23685ca2c166efba"} Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.126157 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqs98" event={"ID":"00680f91-31c0-4231-9b3d-8a1fda4608ea","Type":"ContainerDied","Data":"8e5d0a6034c6d794345571efc618d79930aa736ca1906bd63653efecf398e538"} Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.126179 4796 scope.go:117] "RemoveContainer" containerID="6840ce40040728ccff4183893568519acbd32eb97cfcd81e23685ca2c166efba" Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.126119 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dqs98" Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.134110 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00680f91-31c0-4231-9b3d-8a1fda4608ea-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.134139 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00680f91-31c0-4231-9b3d-8a1fda4608ea-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.134154 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l85v2\" (UniqueName: \"kubernetes.io/projected/00680f91-31c0-4231-9b3d-8a1fda4608ea-kube-api-access-l85v2\") on node \"crc\" DevicePath \"\"" Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.146176 4796 scope.go:117] "RemoveContainer" containerID="9ba333fef1bdeff7fdacc2de07f338feb552bcd93f76828f2d0b3003d98b54d5" Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.160371 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dqs98"] Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.168250 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dqs98"] Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.175111 4796 scope.go:117] "RemoveContainer" containerID="6f80fa1a3f6fbdb81a04f8157ff44d55c74594258469b4e66f07da598f8ad51e" Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.206289 4796 scope.go:117] "RemoveContainer" containerID="6840ce40040728ccff4183893568519acbd32eb97cfcd81e23685ca2c166efba" Dec 05 11:00:09 crc kubenswrapper[4796]: E1205 11:00:09.206815 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6840ce40040728ccff4183893568519acbd32eb97cfcd81e23685ca2c166efba\": container with ID starting with 6840ce40040728ccff4183893568519acbd32eb97cfcd81e23685ca2c166efba not found: ID does not exist" containerID="6840ce40040728ccff4183893568519acbd32eb97cfcd81e23685ca2c166efba" Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.206874 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6840ce40040728ccff4183893568519acbd32eb97cfcd81e23685ca2c166efba"} err="failed to get container status \"6840ce40040728ccff4183893568519acbd32eb97cfcd81e23685ca2c166efba\": rpc error: code = NotFound desc = could not find container \"6840ce40040728ccff4183893568519acbd32eb97cfcd81e23685ca2c166efba\": container with ID starting with 6840ce40040728ccff4183893568519acbd32eb97cfcd81e23685ca2c166efba not found: ID does not exist" Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.206920 4796 scope.go:117] "RemoveContainer" containerID="9ba333fef1bdeff7fdacc2de07f338feb552bcd93f76828f2d0b3003d98b54d5" Dec 05 11:00:09 crc kubenswrapper[4796]: E1205 11:00:09.207524 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ba333fef1bdeff7fdacc2de07f338feb552bcd93f76828f2d0b3003d98b54d5\": container with ID starting with 9ba333fef1bdeff7fdacc2de07f338feb552bcd93f76828f2d0b3003d98b54d5 not found: ID does not exist" containerID="9ba333fef1bdeff7fdacc2de07f338feb552bcd93f76828f2d0b3003d98b54d5" Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.207575 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ba333fef1bdeff7fdacc2de07f338feb552bcd93f76828f2d0b3003d98b54d5"} err="failed to get container status \"9ba333fef1bdeff7fdacc2de07f338feb552bcd93f76828f2d0b3003d98b54d5\": rpc error: code = NotFound desc = could not find container \"9ba333fef1bdeff7fdacc2de07f338feb552bcd93f76828f2d0b3003d98b54d5\": container with ID 
starting with 9ba333fef1bdeff7fdacc2de07f338feb552bcd93f76828f2d0b3003d98b54d5 not found: ID does not exist" Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.207613 4796 scope.go:117] "RemoveContainer" containerID="6f80fa1a3f6fbdb81a04f8157ff44d55c74594258469b4e66f07da598f8ad51e" Dec 05 11:00:09 crc kubenswrapper[4796]: E1205 11:00:09.208066 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f80fa1a3f6fbdb81a04f8157ff44d55c74594258469b4e66f07da598f8ad51e\": container with ID starting with 6f80fa1a3f6fbdb81a04f8157ff44d55c74594258469b4e66f07da598f8ad51e not found: ID does not exist" containerID="6f80fa1a3f6fbdb81a04f8157ff44d55c74594258469b4e66f07da598f8ad51e" Dec 05 11:00:09 crc kubenswrapper[4796]: I1205 11:00:09.208119 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f80fa1a3f6fbdb81a04f8157ff44d55c74594258469b4e66f07da598f8ad51e"} err="failed to get container status \"6f80fa1a3f6fbdb81a04f8157ff44d55c74594258469b4e66f07da598f8ad51e\": rpc error: code = NotFound desc = could not find container \"6f80fa1a3f6fbdb81a04f8157ff44d55c74594258469b4e66f07da598f8ad51e\": container with ID starting with 6f80fa1a3f6fbdb81a04f8157ff44d55c74594258469b4e66f07da598f8ad51e not found: ID does not exist" Dec 05 11:00:10 crc kubenswrapper[4796]: I1205 11:00:10.040134 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00680f91-31c0-4231-9b3d-8a1fda4608ea" path="/var/lib/kubelet/pods/00680f91-31c0-4231-9b3d-8a1fda4608ea/volumes" Dec 05 11:00:35 crc kubenswrapper[4796]: I1205 11:00:35.177185 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 11:00:35 crc kubenswrapper[4796]: I1205 
11:00:35.177960 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 11:00:37 crc kubenswrapper[4796]: I1205 11:00:37.752290 4796 scope.go:117] "RemoveContainer" containerID="75be366635200236d7249af0c8be43ab5f0ec4777402f2f461c2e94a8a1b0278" Dec 05 11:00:39 crc kubenswrapper[4796]: I1205 11:00:39.389605 4796 generic.go:334] "Generic (PLEG): container finished" podID="43ae5283-caa9-4308-b825-3c937081341c" containerID="4b1a7f8ecf4831972763c354b5ab669628dc94a4dd749f2dbddeb5d5a46c6e8c" exitCode=0 Dec 05 11:00:39 crc kubenswrapper[4796]: I1205 11:00:39.389723 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" event={"ID":"43ae5283-caa9-4308-b825-3c937081341c","Type":"ContainerDied","Data":"4b1a7f8ecf4831972763c354b5ab669628dc94a4dd749f2dbddeb5d5a46c6e8c"} Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.728164 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.880900 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/43ae5283-caa9-4308-b825-3c937081341c-nova-extra-config-0\") pod \"43ae5283-caa9-4308-b825-3c937081341c\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.880951 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-migration-ssh-key-1\") pod \"43ae5283-caa9-4308-b825-3c937081341c\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.881029 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqw5d\" (UniqueName: \"kubernetes.io/projected/43ae5283-caa9-4308-b825-3c937081341c-kube-api-access-mqw5d\") pod \"43ae5283-caa9-4308-b825-3c937081341c\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.881275 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-ssh-key\") pod \"43ae5283-caa9-4308-b825-3c937081341c\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.881311 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-cell1-compute-config-0\") pod \"43ae5283-caa9-4308-b825-3c937081341c\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.881343 4796 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-inventory\") pod \"43ae5283-caa9-4308-b825-3c937081341c\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.881414 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-migration-ssh-key-0\") pod \"43ae5283-caa9-4308-b825-3c937081341c\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.881436 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-cell1-compute-config-1\") pod \"43ae5283-caa9-4308-b825-3c937081341c\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.881476 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-combined-ca-bundle\") pod \"43ae5283-caa9-4308-b825-3c937081341c\" (UID: \"43ae5283-caa9-4308-b825-3c937081341c\") " Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.888820 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "43ae5283-caa9-4308-b825-3c937081341c" (UID: "43ae5283-caa9-4308-b825-3c937081341c"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.892877 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43ae5283-caa9-4308-b825-3c937081341c-kube-api-access-mqw5d" (OuterVolumeSpecName: "kube-api-access-mqw5d") pod "43ae5283-caa9-4308-b825-3c937081341c" (UID: "43ae5283-caa9-4308-b825-3c937081341c"). InnerVolumeSpecName "kube-api-access-mqw5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.910129 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-inventory" (OuterVolumeSpecName: "inventory") pod "43ae5283-caa9-4308-b825-3c937081341c" (UID: "43ae5283-caa9-4308-b825-3c937081341c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.911451 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "43ae5283-caa9-4308-b825-3c937081341c" (UID: "43ae5283-caa9-4308-b825-3c937081341c"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.911554 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43ae5283-caa9-4308-b825-3c937081341c-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "43ae5283-caa9-4308-b825-3c937081341c" (UID: "43ae5283-caa9-4308-b825-3c937081341c"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.913330 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "43ae5283-caa9-4308-b825-3c937081341c" (UID: "43ae5283-caa9-4308-b825-3c937081341c"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.913950 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "43ae5283-caa9-4308-b825-3c937081341c" (UID: "43ae5283-caa9-4308-b825-3c937081341c"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.914482 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "43ae5283-caa9-4308-b825-3c937081341c" (UID: "43ae5283-caa9-4308-b825-3c937081341c"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.924414 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "43ae5283-caa9-4308-b825-3c937081341c" (UID: "43ae5283-caa9-4308-b825-3c937081341c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.987861 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.987912 4796 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.987970 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.987982 4796 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.987992 4796 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.988001 4796 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.988016 4796 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/43ae5283-caa9-4308-b825-3c937081341c-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 05 
11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.988028 4796 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/43ae5283-caa9-4308-b825-3c937081341c-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 05 11:00:40 crc kubenswrapper[4796]: I1205 11:00:40.988039 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqw5d\" (UniqueName: \"kubernetes.io/projected/43ae5283-caa9-4308-b825-3c937081341c-kube-api-access-mqw5d\") on node \"crc\" DevicePath \"\"" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.412313 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" event={"ID":"43ae5283-caa9-4308-b825-3c937081341c","Type":"ContainerDied","Data":"53bb11da50b12c4bda6c0f5ffa939a1706f9b3de6aad6286f39867e85eac3ff9"} Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.412385 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53bb11da50b12c4bda6c0f5ffa939a1706f9b3de6aad6286f39867e85eac3ff9" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.412447 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-47tqx" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.500266 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4"] Dec 05 11:00:41 crc kubenswrapper[4796]: E1205 11:00:41.501086 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00680f91-31c0-4231-9b3d-8a1fda4608ea" containerName="extract-content" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.501117 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="00680f91-31c0-4231-9b3d-8a1fda4608ea" containerName="extract-content" Dec 05 11:00:41 crc kubenswrapper[4796]: E1205 11:00:41.501167 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ae5283-caa9-4308-b825-3c937081341c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.501174 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ae5283-caa9-4308-b825-3c937081341c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 05 11:00:41 crc kubenswrapper[4796]: E1205 11:00:41.501207 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00680f91-31c0-4231-9b3d-8a1fda4608ea" containerName="registry-server" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.501215 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="00680f91-31c0-4231-9b3d-8a1fda4608ea" containerName="registry-server" Dec 05 11:00:41 crc kubenswrapper[4796]: E1205 11:00:41.501255 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00680f91-31c0-4231-9b3d-8a1fda4608ea" containerName="extract-utilities" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.501264 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="00680f91-31c0-4231-9b3d-8a1fda4608ea" containerName="extract-utilities" Dec 05 11:00:41 crc kubenswrapper[4796]: E1205 11:00:41.501283 4796 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77f8a181-b8ee-4404-9065-17bc3b52ff66" containerName="collect-profiles" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.501291 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="77f8a181-b8ee-4404-9065-17bc3b52ff66" containerName="collect-profiles" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.501624 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="77f8a181-b8ee-4404-9065-17bc3b52ff66" containerName="collect-profiles" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.501657 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="43ae5283-caa9-4308-b825-3c937081341c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.501671 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="00680f91-31c0-4231-9b3d-8a1fda4608ea" containerName="registry-server" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.502846 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.506964 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4"] Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.507259 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-vmswc" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.507466 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.507502 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.507660 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.507740 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.600466 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.600522 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.600560 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.600615 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.600650 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqwfn\" (UniqueName: \"kubernetes.io/projected/91c465f7-7f18-43b4-9b15-d24ed713432f-kube-api-access-cqwfn\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.600874 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.601297 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.703101 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.703359 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.703390 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.703439 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.703471 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqwfn\" (UniqueName: \"kubernetes.io/projected/91c465f7-7f18-43b4-9b15-d24ed713432f-kube-api-access-cqwfn\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.703505 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.703582 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.707920 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ceilometer-compute-config-data-2\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.708335 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.708793 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.709217 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.709646 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc 
kubenswrapper[4796]: I1205 11:00:41.711329 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.719143 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqwfn\" (UniqueName: \"kubernetes.io/projected/91c465f7-7f18-43b4-9b15-d24ed713432f-kube-api-access-cqwfn\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:41 crc kubenswrapper[4796]: I1205 11:00:41.832819 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:00:42 crc kubenswrapper[4796]: I1205 11:00:42.341113 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4"] Dec 05 11:00:42 crc kubenswrapper[4796]: I1205 11:00:42.421148 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" event={"ID":"91c465f7-7f18-43b4-9b15-d24ed713432f","Type":"ContainerStarted","Data":"7d159fcdc8281bdcc80ca611c0d02e146143335103430cbdd73f36cb86313f2b"} Dec 05 11:00:43 crc kubenswrapper[4796]: I1205 11:00:43.433247 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" event={"ID":"91c465f7-7f18-43b4-9b15-d24ed713432f","Type":"ContainerStarted","Data":"746b3c36e18c05e5625e7a1c50d1e3be064138935ff87fa523e092dac7f578ea"} Dec 05 11:00:43 crc kubenswrapper[4796]: I1205 11:00:43.461218 4796 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" podStartSLOduration=1.920783207 podStartE2EDuration="2.461194638s" podCreationTimestamp="2025-12-05 11:00:41 +0000 UTC" firstStartedPulling="2025-12-05 11:00:42.342597514 +0000 UTC m=+1988.630703027" lastFinishedPulling="2025-12-05 11:00:42.883008945 +0000 UTC m=+1989.171114458" observedRunningTime="2025-12-05 11:00:43.448094505 +0000 UTC m=+1989.736200018" watchObservedRunningTime="2025-12-05 11:00:43.461194638 +0000 UTC m=+1989.749300152" Dec 05 11:01:00 crc kubenswrapper[4796]: I1205 11:01:00.136932 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29415541-7j2vs"] Dec 05 11:01:00 crc kubenswrapper[4796]: I1205 11:01:00.138734 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29415541-7j2vs" Dec 05 11:01:00 crc kubenswrapper[4796]: I1205 11:01:00.162640 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29415541-7j2vs"] Dec 05 11:01:00 crc kubenswrapper[4796]: I1205 11:01:00.210733 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8745cb1c-046c-423c-ada4-99fac12690eb-fernet-keys\") pod \"keystone-cron-29415541-7j2vs\" (UID: \"8745cb1c-046c-423c-ada4-99fac12690eb\") " pod="openstack/keystone-cron-29415541-7j2vs" Dec 05 11:01:00 crc kubenswrapper[4796]: I1205 11:01:00.210827 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d75c\" (UniqueName: \"kubernetes.io/projected/8745cb1c-046c-423c-ada4-99fac12690eb-kube-api-access-9d75c\") pod \"keystone-cron-29415541-7j2vs\" (UID: \"8745cb1c-046c-423c-ada4-99fac12690eb\") " pod="openstack/keystone-cron-29415541-7j2vs" Dec 05 11:01:00 crc kubenswrapper[4796]: I1205 11:01:00.210971 4796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8745cb1c-046c-423c-ada4-99fac12690eb-combined-ca-bundle\") pod \"keystone-cron-29415541-7j2vs\" (UID: \"8745cb1c-046c-423c-ada4-99fac12690eb\") " pod="openstack/keystone-cron-29415541-7j2vs" Dec 05 11:01:00 crc kubenswrapper[4796]: I1205 11:01:00.211137 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8745cb1c-046c-423c-ada4-99fac12690eb-config-data\") pod \"keystone-cron-29415541-7j2vs\" (UID: \"8745cb1c-046c-423c-ada4-99fac12690eb\") " pod="openstack/keystone-cron-29415541-7j2vs" Dec 05 11:01:00 crc kubenswrapper[4796]: I1205 11:01:00.312880 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8745cb1c-046c-423c-ada4-99fac12690eb-config-data\") pod \"keystone-cron-29415541-7j2vs\" (UID: \"8745cb1c-046c-423c-ada4-99fac12690eb\") " pod="openstack/keystone-cron-29415541-7j2vs" Dec 05 11:01:00 crc kubenswrapper[4796]: I1205 11:01:00.312966 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8745cb1c-046c-423c-ada4-99fac12690eb-fernet-keys\") pod \"keystone-cron-29415541-7j2vs\" (UID: \"8745cb1c-046c-423c-ada4-99fac12690eb\") " pod="openstack/keystone-cron-29415541-7j2vs" Dec 05 11:01:00 crc kubenswrapper[4796]: I1205 11:01:00.313014 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d75c\" (UniqueName: \"kubernetes.io/projected/8745cb1c-046c-423c-ada4-99fac12690eb-kube-api-access-9d75c\") pod \"keystone-cron-29415541-7j2vs\" (UID: \"8745cb1c-046c-423c-ada4-99fac12690eb\") " pod="openstack/keystone-cron-29415541-7j2vs" Dec 05 11:01:00 crc kubenswrapper[4796]: I1205 11:01:00.313074 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8745cb1c-046c-423c-ada4-99fac12690eb-combined-ca-bundle\") pod \"keystone-cron-29415541-7j2vs\" (UID: \"8745cb1c-046c-423c-ada4-99fac12690eb\") " pod="openstack/keystone-cron-29415541-7j2vs" Dec 05 11:01:00 crc kubenswrapper[4796]: I1205 11:01:00.320074 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8745cb1c-046c-423c-ada4-99fac12690eb-config-data\") pod \"keystone-cron-29415541-7j2vs\" (UID: \"8745cb1c-046c-423c-ada4-99fac12690eb\") " pod="openstack/keystone-cron-29415541-7j2vs" Dec 05 11:01:00 crc kubenswrapper[4796]: I1205 11:01:00.320281 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8745cb1c-046c-423c-ada4-99fac12690eb-combined-ca-bundle\") pod \"keystone-cron-29415541-7j2vs\" (UID: \"8745cb1c-046c-423c-ada4-99fac12690eb\") " pod="openstack/keystone-cron-29415541-7j2vs" Dec 05 11:01:00 crc kubenswrapper[4796]: I1205 11:01:00.320570 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8745cb1c-046c-423c-ada4-99fac12690eb-fernet-keys\") pod \"keystone-cron-29415541-7j2vs\" (UID: \"8745cb1c-046c-423c-ada4-99fac12690eb\") " pod="openstack/keystone-cron-29415541-7j2vs" Dec 05 11:01:00 crc kubenswrapper[4796]: I1205 11:01:00.329062 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d75c\" (UniqueName: \"kubernetes.io/projected/8745cb1c-046c-423c-ada4-99fac12690eb-kube-api-access-9d75c\") pod \"keystone-cron-29415541-7j2vs\" (UID: \"8745cb1c-046c-423c-ada4-99fac12690eb\") " pod="openstack/keystone-cron-29415541-7j2vs" Dec 05 11:01:00 crc kubenswrapper[4796]: I1205 11:01:00.455359 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29415541-7j2vs" Dec 05 11:01:00 crc kubenswrapper[4796]: I1205 11:01:00.868614 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29415541-7j2vs"] Dec 05 11:01:01 crc kubenswrapper[4796]: I1205 11:01:01.599287 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29415541-7j2vs" event={"ID":"8745cb1c-046c-423c-ada4-99fac12690eb","Type":"ContainerStarted","Data":"d2b6e2f167f1123e450991d82b458e72042c786de2f93d2d740ef93735431d0c"} Dec 05 11:01:01 crc kubenswrapper[4796]: I1205 11:01:01.599653 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29415541-7j2vs" event={"ID":"8745cb1c-046c-423c-ada4-99fac12690eb","Type":"ContainerStarted","Data":"889404a703b8d52cc590db0c926360e17cec7f0d180ace64edb773a7f8a19b1e"} Dec 05 11:01:01 crc kubenswrapper[4796]: I1205 11:01:01.615135 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29415541-7j2vs" podStartSLOduration=1.615122044 podStartE2EDuration="1.615122044s" podCreationTimestamp="2025-12-05 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 11:01:01.611812253 +0000 UTC m=+2007.899917757" watchObservedRunningTime="2025-12-05 11:01:01.615122044 +0000 UTC m=+2007.903227557" Dec 05 11:01:03 crc kubenswrapper[4796]: I1205 11:01:03.618037 4796 generic.go:334] "Generic (PLEG): container finished" podID="8745cb1c-046c-423c-ada4-99fac12690eb" containerID="d2b6e2f167f1123e450991d82b458e72042c786de2f93d2d740ef93735431d0c" exitCode=0 Dec 05 11:01:03 crc kubenswrapper[4796]: I1205 11:01:03.618125 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29415541-7j2vs" 
event={"ID":"8745cb1c-046c-423c-ada4-99fac12690eb","Type":"ContainerDied","Data":"d2b6e2f167f1123e450991d82b458e72042c786de2f93d2d740ef93735431d0c"} Dec 05 11:01:04 crc kubenswrapper[4796]: I1205 11:01:04.895262 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29415541-7j2vs" Dec 05 11:01:04 crc kubenswrapper[4796]: I1205 11:01:04.915194 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8745cb1c-046c-423c-ada4-99fac12690eb-combined-ca-bundle\") pod \"8745cb1c-046c-423c-ada4-99fac12690eb\" (UID: \"8745cb1c-046c-423c-ada4-99fac12690eb\") " Dec 05 11:01:04 crc kubenswrapper[4796]: I1205 11:01:04.943201 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8745cb1c-046c-423c-ada4-99fac12690eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8745cb1c-046c-423c-ada4-99fac12690eb" (UID: "8745cb1c-046c-423c-ada4-99fac12690eb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.019366 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8745cb1c-046c-423c-ada4-99fac12690eb-config-data\") pod \"8745cb1c-046c-423c-ada4-99fac12690eb\" (UID: \"8745cb1c-046c-423c-ada4-99fac12690eb\") " Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.019531 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8745cb1c-046c-423c-ada4-99fac12690eb-fernet-keys\") pod \"8745cb1c-046c-423c-ada4-99fac12690eb\" (UID: \"8745cb1c-046c-423c-ada4-99fac12690eb\") " Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.019585 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d75c\" (UniqueName: \"kubernetes.io/projected/8745cb1c-046c-423c-ada4-99fac12690eb-kube-api-access-9d75c\") pod \"8745cb1c-046c-423c-ada4-99fac12690eb\" (UID: \"8745cb1c-046c-423c-ada4-99fac12690eb\") " Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.020718 4796 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8745cb1c-046c-423c-ada4-99fac12690eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.024066 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8745cb1c-046c-423c-ada4-99fac12690eb-kube-api-access-9d75c" (OuterVolumeSpecName: "kube-api-access-9d75c") pod "8745cb1c-046c-423c-ada4-99fac12690eb" (UID: "8745cb1c-046c-423c-ada4-99fac12690eb"). InnerVolumeSpecName "kube-api-access-9d75c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.024394 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8745cb1c-046c-423c-ada4-99fac12690eb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8745cb1c-046c-423c-ada4-99fac12690eb" (UID: "8745cb1c-046c-423c-ada4-99fac12690eb"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.064070 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8745cb1c-046c-423c-ada4-99fac12690eb-config-data" (OuterVolumeSpecName: "config-data") pod "8745cb1c-046c-423c-ada4-99fac12690eb" (UID: "8745cb1c-046c-423c-ada4-99fac12690eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.123591 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8745cb1c-046c-423c-ada4-99fac12690eb-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.123633 4796 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8745cb1c-046c-423c-ada4-99fac12690eb-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.123644 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d75c\" (UniqueName: \"kubernetes.io/projected/8745cb1c-046c-423c-ada4-99fac12690eb-kube-api-access-9d75c\") on node \"crc\" DevicePath \"\"" Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.177120 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.177200 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.177267 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.178103 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dcb01733a1303955cc7132d26ec26ddb1901af7c36f36800bac1df8150042858"} pod="openshift-machine-config-operator/machine-config-daemon-9pllw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.178173 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" containerID="cri-o://dcb01733a1303955cc7132d26ec26ddb1901af7c36f36800bac1df8150042858" gracePeriod=600 Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.639302 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29415541-7j2vs" Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.639298 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29415541-7j2vs" event={"ID":"8745cb1c-046c-423c-ada4-99fac12690eb","Type":"ContainerDied","Data":"889404a703b8d52cc590db0c926360e17cec7f0d180ace64edb773a7f8a19b1e"} Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.639748 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="889404a703b8d52cc590db0c926360e17cec7f0d180ace64edb773a7f8a19b1e" Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.643705 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerDied","Data":"dcb01733a1303955cc7132d26ec26ddb1901af7c36f36800bac1df8150042858"} Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.643858 4796 scope.go:117] "RemoveContainer" containerID="c7e92142eecf3a4a1a429cb5a8208a70ba577711a5f6b8b9759bae3494efeb03" Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.643698 4796 generic.go:334] "Generic (PLEG): container finished" podID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerID="dcb01733a1303955cc7132d26ec26ddb1901af7c36f36800bac1df8150042858" exitCode=0 Dec 05 11:01:05 crc kubenswrapper[4796]: I1205 11:01:05.644045 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerStarted","Data":"9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377"} Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.128495 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-prh7p"] Dec 05 11:01:38 crc kubenswrapper[4796]: E1205 11:01:38.129647 4796 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="8745cb1c-046c-423c-ada4-99fac12690eb" containerName="keystone-cron" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.129662 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8745cb1c-046c-423c-ada4-99fac12690eb" containerName="keystone-cron" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.130022 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8745cb1c-046c-423c-ada4-99fac12690eb" containerName="keystone-cron" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.131506 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prh7p" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.141936 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-prh7p"] Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.233567 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/765e2de6-f9ec-4ed0-b8db-f97b834f7bd3-utilities\") pod \"redhat-marketplace-prh7p\" (UID: \"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3\") " pod="openshift-marketplace/redhat-marketplace-prh7p" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.233620 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7s9m\" (UniqueName: \"kubernetes.io/projected/765e2de6-f9ec-4ed0-b8db-f97b834f7bd3-kube-api-access-l7s9m\") pod \"redhat-marketplace-prh7p\" (UID: \"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3\") " pod="openshift-marketplace/redhat-marketplace-prh7p" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.233847 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/765e2de6-f9ec-4ed0-b8db-f97b834f7bd3-catalog-content\") pod \"redhat-marketplace-prh7p\" (UID: 
\"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3\") " pod="openshift-marketplace/redhat-marketplace-prh7p" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.324252 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s9x8m"] Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.326002 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s9x8m" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.336515 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s9x8m"] Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.336661 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/765e2de6-f9ec-4ed0-b8db-f97b834f7bd3-utilities\") pod \"redhat-marketplace-prh7p\" (UID: \"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3\") " pod="openshift-marketplace/redhat-marketplace-prh7p" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.336724 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7s9m\" (UniqueName: \"kubernetes.io/projected/765e2de6-f9ec-4ed0-b8db-f97b834f7bd3-kube-api-access-l7s9m\") pod \"redhat-marketplace-prh7p\" (UID: \"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3\") " pod="openshift-marketplace/redhat-marketplace-prh7p" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.336786 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/765e2de6-f9ec-4ed0-b8db-f97b834f7bd3-catalog-content\") pod \"redhat-marketplace-prh7p\" (UID: \"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3\") " pod="openshift-marketplace/redhat-marketplace-prh7p" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.336848 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/cde0671d-1321-48ae-8fbd-f04694e87bf4-utilities\") pod \"community-operators-s9x8m\" (UID: \"cde0671d-1321-48ae-8fbd-f04694e87bf4\") " pod="openshift-marketplace/community-operators-s9x8m" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.336880 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgcq6\" (UniqueName: \"kubernetes.io/projected/cde0671d-1321-48ae-8fbd-f04694e87bf4-kube-api-access-jgcq6\") pod \"community-operators-s9x8m\" (UID: \"cde0671d-1321-48ae-8fbd-f04694e87bf4\") " pod="openshift-marketplace/community-operators-s9x8m" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.336924 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde0671d-1321-48ae-8fbd-f04694e87bf4-catalog-content\") pod \"community-operators-s9x8m\" (UID: \"cde0671d-1321-48ae-8fbd-f04694e87bf4\") " pod="openshift-marketplace/community-operators-s9x8m" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.337281 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/765e2de6-f9ec-4ed0-b8db-f97b834f7bd3-catalog-content\") pod \"redhat-marketplace-prh7p\" (UID: \"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3\") " pod="openshift-marketplace/redhat-marketplace-prh7p" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.340792 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/765e2de6-f9ec-4ed0-b8db-f97b834f7bd3-utilities\") pod \"redhat-marketplace-prh7p\" (UID: \"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3\") " pod="openshift-marketplace/redhat-marketplace-prh7p" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.384196 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7s9m\" 
(UniqueName: \"kubernetes.io/projected/765e2de6-f9ec-4ed0-b8db-f97b834f7bd3-kube-api-access-l7s9m\") pod \"redhat-marketplace-prh7p\" (UID: \"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3\") " pod="openshift-marketplace/redhat-marketplace-prh7p" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.438561 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde0671d-1321-48ae-8fbd-f04694e87bf4-utilities\") pod \"community-operators-s9x8m\" (UID: \"cde0671d-1321-48ae-8fbd-f04694e87bf4\") " pod="openshift-marketplace/community-operators-s9x8m" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.438660 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgcq6\" (UniqueName: \"kubernetes.io/projected/cde0671d-1321-48ae-8fbd-f04694e87bf4-kube-api-access-jgcq6\") pod \"community-operators-s9x8m\" (UID: \"cde0671d-1321-48ae-8fbd-f04694e87bf4\") " pod="openshift-marketplace/community-operators-s9x8m" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.438738 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde0671d-1321-48ae-8fbd-f04694e87bf4-catalog-content\") pod \"community-operators-s9x8m\" (UID: \"cde0671d-1321-48ae-8fbd-f04694e87bf4\") " pod="openshift-marketplace/community-operators-s9x8m" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.439429 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde0671d-1321-48ae-8fbd-f04694e87bf4-catalog-content\") pod \"community-operators-s9x8m\" (UID: \"cde0671d-1321-48ae-8fbd-f04694e87bf4\") " pod="openshift-marketplace/community-operators-s9x8m" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.439672 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cde0671d-1321-48ae-8fbd-f04694e87bf4-utilities\") pod \"community-operators-s9x8m\" (UID: \"cde0671d-1321-48ae-8fbd-f04694e87bf4\") " pod="openshift-marketplace/community-operators-s9x8m" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.448308 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prh7p" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.454073 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgcq6\" (UniqueName: \"kubernetes.io/projected/cde0671d-1321-48ae-8fbd-f04694e87bf4-kube-api-access-jgcq6\") pod \"community-operators-s9x8m\" (UID: \"cde0671d-1321-48ae-8fbd-f04694e87bf4\") " pod="openshift-marketplace/community-operators-s9x8m" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.644154 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s9x8m" Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.896571 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-prh7p"] Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.961860 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prh7p" event={"ID":"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3","Type":"ContainerStarted","Data":"bdcb6dc2ba7859aa3cbc42d36d0c60474bf0d780b6e471167af82dc997ed4a75"} Dec 05 11:01:38 crc kubenswrapper[4796]: I1205 11:01:38.977963 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s9x8m"] Dec 05 11:01:39 crc kubenswrapper[4796]: I1205 11:01:39.973795 4796 generic.go:334] "Generic (PLEG): container finished" podID="cde0671d-1321-48ae-8fbd-f04694e87bf4" containerID="e7e24e76e1a172699495237464ecee732024f656a09c28201dbef3a14e49d8e7" exitCode=0 Dec 05 11:01:39 crc kubenswrapper[4796]: I1205 11:01:39.973859 4796 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9x8m" event={"ID":"cde0671d-1321-48ae-8fbd-f04694e87bf4","Type":"ContainerDied","Data":"e7e24e76e1a172699495237464ecee732024f656a09c28201dbef3a14e49d8e7"} Dec 05 11:01:39 crc kubenswrapper[4796]: I1205 11:01:39.974581 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9x8m" event={"ID":"cde0671d-1321-48ae-8fbd-f04694e87bf4","Type":"ContainerStarted","Data":"cfeb99789f5366a9c01ab958a74e2317abb35e1758d0a2680ab1ac91b72d0b34"} Dec 05 11:01:39 crc kubenswrapper[4796]: I1205 11:01:39.976879 4796 generic.go:334] "Generic (PLEG): container finished" podID="765e2de6-f9ec-4ed0-b8db-f97b834f7bd3" containerID="4e0b56487c0891769b900785c75740ae8b2d583aef8fdba8634f334aa546957f" exitCode=0 Dec 05 11:01:39 crc kubenswrapper[4796]: I1205 11:01:39.976958 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prh7p" event={"ID":"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3","Type":"ContainerDied","Data":"4e0b56487c0891769b900785c75740ae8b2d583aef8fdba8634f334aa546957f"} Dec 05 11:01:40 crc kubenswrapper[4796]: I1205 11:01:40.986453 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9x8m" event={"ID":"cde0671d-1321-48ae-8fbd-f04694e87bf4","Type":"ContainerStarted","Data":"7ce08d224e2f3f558c49bd9b164cbe3b2709ac17e4ea029b0e756eee0a2d3373"} Dec 05 11:01:40 crc kubenswrapper[4796]: I1205 11:01:40.988432 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prh7p" event={"ID":"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3","Type":"ContainerStarted","Data":"b3059395643f964e314275214c073ca66c756414a3a4cbb76f57f4e329622c93"} Dec 05 11:01:42 crc kubenswrapper[4796]: I1205 11:01:42.000555 4796 generic.go:334] "Generic (PLEG): container finished" podID="765e2de6-f9ec-4ed0-b8db-f97b834f7bd3" 
containerID="b3059395643f964e314275214c073ca66c756414a3a4cbb76f57f4e329622c93" exitCode=0 Dec 05 11:01:42 crc kubenswrapper[4796]: I1205 11:01:42.000742 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prh7p" event={"ID":"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3","Type":"ContainerDied","Data":"b3059395643f964e314275214c073ca66c756414a3a4cbb76f57f4e329622c93"} Dec 05 11:01:42 crc kubenswrapper[4796]: I1205 11:01:42.003984 4796 generic.go:334] "Generic (PLEG): container finished" podID="cde0671d-1321-48ae-8fbd-f04694e87bf4" containerID="7ce08d224e2f3f558c49bd9b164cbe3b2709ac17e4ea029b0e756eee0a2d3373" exitCode=0 Dec 05 11:01:42 crc kubenswrapper[4796]: I1205 11:01:42.004018 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9x8m" event={"ID":"cde0671d-1321-48ae-8fbd-f04694e87bf4","Type":"ContainerDied","Data":"7ce08d224e2f3f558c49bd9b164cbe3b2709ac17e4ea029b0e756eee0a2d3373"} Dec 05 11:01:43 crc kubenswrapper[4796]: I1205 11:01:43.013277 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prh7p" event={"ID":"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3","Type":"ContainerStarted","Data":"2d5acc75cddc19dc66875ba586c5cc7f63bf02d40845e5885f6635f32e05592d"} Dec 05 11:01:43 crc kubenswrapper[4796]: I1205 11:01:43.016195 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9x8m" event={"ID":"cde0671d-1321-48ae-8fbd-f04694e87bf4","Type":"ContainerStarted","Data":"10bb981ab45f90e4a9484cce361513f3d0a103c7325469ebf3f023350f5791a0"} Dec 05 11:01:43 crc kubenswrapper[4796]: I1205 11:01:43.030658 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-prh7p" podStartSLOduration=2.539064327 podStartE2EDuration="5.030639161s" podCreationTimestamp="2025-12-05 11:01:38 +0000 UTC" firstStartedPulling="2025-12-05 11:01:39.979534899 +0000 
UTC m=+2046.267640413" lastFinishedPulling="2025-12-05 11:01:42.471109735 +0000 UTC m=+2048.759215247" observedRunningTime="2025-12-05 11:01:43.026990562 +0000 UTC m=+2049.315096075" watchObservedRunningTime="2025-12-05 11:01:43.030639161 +0000 UTC m=+2049.318744675" Dec 05 11:01:43 crc kubenswrapper[4796]: I1205 11:01:43.049071 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s9x8m" podStartSLOduration=2.46942132 podStartE2EDuration="5.049052452s" podCreationTimestamp="2025-12-05 11:01:38 +0000 UTC" firstStartedPulling="2025-12-05 11:01:39.975840565 +0000 UTC m=+2046.263946079" lastFinishedPulling="2025-12-05 11:01:42.555471697 +0000 UTC m=+2048.843577211" observedRunningTime="2025-12-05 11:01:43.045633305 +0000 UTC m=+2049.333738818" watchObservedRunningTime="2025-12-05 11:01:43.049052452 +0000 UTC m=+2049.337157954" Dec 05 11:01:48 crc kubenswrapper[4796]: I1205 11:01:48.449254 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-prh7p" Dec 05 11:01:48 crc kubenswrapper[4796]: I1205 11:01:48.449814 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-prh7p" Dec 05 11:01:48 crc kubenswrapper[4796]: I1205 11:01:48.487207 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-prh7p" Dec 05 11:01:48 crc kubenswrapper[4796]: I1205 11:01:48.645368 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s9x8m" Dec 05 11:01:48 crc kubenswrapper[4796]: I1205 11:01:48.645803 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s9x8m" Dec 05 11:01:48 crc kubenswrapper[4796]: I1205 11:01:48.687782 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-s9x8m" Dec 05 11:01:49 crc kubenswrapper[4796]: I1205 11:01:49.115227 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s9x8m" Dec 05 11:01:49 crc kubenswrapper[4796]: I1205 11:01:49.117411 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-prh7p" Dec 05 11:01:49 crc kubenswrapper[4796]: I1205 11:01:49.914008 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s9x8m"] Dec 05 11:01:50 crc kubenswrapper[4796]: I1205 11:01:50.917939 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-prh7p"] Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.088145 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-prh7p" podUID="765e2de6-f9ec-4ed0-b8db-f97b834f7bd3" containerName="registry-server" containerID="cri-o://2d5acc75cddc19dc66875ba586c5cc7f63bf02d40845e5885f6635f32e05592d" gracePeriod=2 Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.088284 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s9x8m" podUID="cde0671d-1321-48ae-8fbd-f04694e87bf4" containerName="registry-server" containerID="cri-o://10bb981ab45f90e4a9484cce361513f3d0a103c7325469ebf3f023350f5791a0" gracePeriod=2 Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.517525 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s9x8m" Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.522492 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prh7p" Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.627400 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/765e2de6-f9ec-4ed0-b8db-f97b834f7bd3-utilities\") pod \"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3\" (UID: \"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3\") " Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.627526 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde0671d-1321-48ae-8fbd-f04694e87bf4-catalog-content\") pod \"cde0671d-1321-48ae-8fbd-f04694e87bf4\" (UID: \"cde0671d-1321-48ae-8fbd-f04694e87bf4\") " Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.627562 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgcq6\" (UniqueName: \"kubernetes.io/projected/cde0671d-1321-48ae-8fbd-f04694e87bf4-kube-api-access-jgcq6\") pod \"cde0671d-1321-48ae-8fbd-f04694e87bf4\" (UID: \"cde0671d-1321-48ae-8fbd-f04694e87bf4\") " Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.627771 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7s9m\" (UniqueName: \"kubernetes.io/projected/765e2de6-f9ec-4ed0-b8db-f97b834f7bd3-kube-api-access-l7s9m\") pod \"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3\" (UID: \"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3\") " Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.627793 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde0671d-1321-48ae-8fbd-f04694e87bf4-utilities\") pod \"cde0671d-1321-48ae-8fbd-f04694e87bf4\" (UID: \"cde0671d-1321-48ae-8fbd-f04694e87bf4\") " Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.627821 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/765e2de6-f9ec-4ed0-b8db-f97b834f7bd3-catalog-content\") pod \"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3\" (UID: \"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3\") " Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.628331 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/765e2de6-f9ec-4ed0-b8db-f97b834f7bd3-utilities" (OuterVolumeSpecName: "utilities") pod "765e2de6-f9ec-4ed0-b8db-f97b834f7bd3" (UID: "765e2de6-f9ec-4ed0-b8db-f97b834f7bd3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.628746 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cde0671d-1321-48ae-8fbd-f04694e87bf4-utilities" (OuterVolumeSpecName: "utilities") pod "cde0671d-1321-48ae-8fbd-f04694e87bf4" (UID: "cde0671d-1321-48ae-8fbd-f04694e87bf4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.634907 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cde0671d-1321-48ae-8fbd-f04694e87bf4-kube-api-access-jgcq6" (OuterVolumeSpecName: "kube-api-access-jgcq6") pod "cde0671d-1321-48ae-8fbd-f04694e87bf4" (UID: "cde0671d-1321-48ae-8fbd-f04694e87bf4"). InnerVolumeSpecName "kube-api-access-jgcq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.634964 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/765e2de6-f9ec-4ed0-b8db-f97b834f7bd3-kube-api-access-l7s9m" (OuterVolumeSpecName: "kube-api-access-l7s9m") pod "765e2de6-f9ec-4ed0-b8db-f97b834f7bd3" (UID: "765e2de6-f9ec-4ed0-b8db-f97b834f7bd3"). InnerVolumeSpecName "kube-api-access-l7s9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.644059 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/765e2de6-f9ec-4ed0-b8db-f97b834f7bd3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "765e2de6-f9ec-4ed0-b8db-f97b834f7bd3" (UID: "765e2de6-f9ec-4ed0-b8db-f97b834f7bd3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.670088 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cde0671d-1321-48ae-8fbd-f04694e87bf4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cde0671d-1321-48ae-8fbd-f04694e87bf4" (UID: "cde0671d-1321-48ae-8fbd-f04694e87bf4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.730990 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7s9m\" (UniqueName: \"kubernetes.io/projected/765e2de6-f9ec-4ed0-b8db-f97b834f7bd3-kube-api-access-l7s9m\") on node \"crc\" DevicePath \"\"" Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.731028 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cde0671d-1321-48ae-8fbd-f04694e87bf4-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.731041 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/765e2de6-f9ec-4ed0-b8db-f97b834f7bd3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.731051 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/765e2de6-f9ec-4ed0-b8db-f97b834f7bd3-utilities\") on node \"crc\" 
DevicePath \"\"" Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.731062 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cde0671d-1321-48ae-8fbd-f04694e87bf4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 11:01:51 crc kubenswrapper[4796]: I1205 11:01:51.731076 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgcq6\" (UniqueName: \"kubernetes.io/projected/cde0671d-1321-48ae-8fbd-f04694e87bf4-kube-api-access-jgcq6\") on node \"crc\" DevicePath \"\"" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.102836 4796 generic.go:334] "Generic (PLEG): container finished" podID="765e2de6-f9ec-4ed0-b8db-f97b834f7bd3" containerID="2d5acc75cddc19dc66875ba586c5cc7f63bf02d40845e5885f6635f32e05592d" exitCode=0 Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.102886 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prh7p" event={"ID":"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3","Type":"ContainerDied","Data":"2d5acc75cddc19dc66875ba586c5cc7f63bf02d40845e5885f6635f32e05592d"} Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.103872 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-prh7p" event={"ID":"765e2de6-f9ec-4ed0-b8db-f97b834f7bd3","Type":"ContainerDied","Data":"bdcb6dc2ba7859aa3cbc42d36d0c60474bf0d780b6e471167af82dc997ed4a75"} Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.103906 4796 scope.go:117] "RemoveContainer" containerID="2d5acc75cddc19dc66875ba586c5cc7f63bf02d40845e5885f6635f32e05592d" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.102963 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-prh7p" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.106964 4796 generic.go:334] "Generic (PLEG): container finished" podID="cde0671d-1321-48ae-8fbd-f04694e87bf4" containerID="10bb981ab45f90e4a9484cce361513f3d0a103c7325469ebf3f023350f5791a0" exitCode=0 Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.106988 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9x8m" event={"ID":"cde0671d-1321-48ae-8fbd-f04694e87bf4","Type":"ContainerDied","Data":"10bb981ab45f90e4a9484cce361513f3d0a103c7325469ebf3f023350f5791a0"} Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.107010 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s9x8m" event={"ID":"cde0671d-1321-48ae-8fbd-f04694e87bf4","Type":"ContainerDied","Data":"cfeb99789f5366a9c01ab958a74e2317abb35e1758d0a2680ab1ac91b72d0b34"} Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.107115 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s9x8m" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.138125 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s9x8m"] Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.143611 4796 scope.go:117] "RemoveContainer" containerID="b3059395643f964e314275214c073ca66c756414a3a4cbb76f57f4e329622c93" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.144989 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s9x8m"] Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.151210 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-prh7p"] Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.156572 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-prh7p"] Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.159567 4796 scope.go:117] "RemoveContainer" containerID="4e0b56487c0891769b900785c75740ae8b2d583aef8fdba8634f334aa546957f" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.172479 4796 scope.go:117] "RemoveContainer" containerID="2d5acc75cddc19dc66875ba586c5cc7f63bf02d40845e5885f6635f32e05592d" Dec 05 11:01:52 crc kubenswrapper[4796]: E1205 11:01:52.172763 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d5acc75cddc19dc66875ba586c5cc7f63bf02d40845e5885f6635f32e05592d\": container with ID starting with 2d5acc75cddc19dc66875ba586c5cc7f63bf02d40845e5885f6635f32e05592d not found: ID does not exist" containerID="2d5acc75cddc19dc66875ba586c5cc7f63bf02d40845e5885f6635f32e05592d" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.172801 4796 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2d5acc75cddc19dc66875ba586c5cc7f63bf02d40845e5885f6635f32e05592d"} err="failed to get container status \"2d5acc75cddc19dc66875ba586c5cc7f63bf02d40845e5885f6635f32e05592d\": rpc error: code = NotFound desc = could not find container \"2d5acc75cddc19dc66875ba586c5cc7f63bf02d40845e5885f6635f32e05592d\": container with ID starting with 2d5acc75cddc19dc66875ba586c5cc7f63bf02d40845e5885f6635f32e05592d not found: ID does not exist" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.172833 4796 scope.go:117] "RemoveContainer" containerID="b3059395643f964e314275214c073ca66c756414a3a4cbb76f57f4e329622c93" Dec 05 11:01:52 crc kubenswrapper[4796]: E1205 11:01:52.173075 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3059395643f964e314275214c073ca66c756414a3a4cbb76f57f4e329622c93\": container with ID starting with b3059395643f964e314275214c073ca66c756414a3a4cbb76f57f4e329622c93 not found: ID does not exist" containerID="b3059395643f964e314275214c073ca66c756414a3a4cbb76f57f4e329622c93" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.173106 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3059395643f964e314275214c073ca66c756414a3a4cbb76f57f4e329622c93"} err="failed to get container status \"b3059395643f964e314275214c073ca66c756414a3a4cbb76f57f4e329622c93\": rpc error: code = NotFound desc = could not find container \"b3059395643f964e314275214c073ca66c756414a3a4cbb76f57f4e329622c93\": container with ID starting with b3059395643f964e314275214c073ca66c756414a3a4cbb76f57f4e329622c93 not found: ID does not exist" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.173126 4796 scope.go:117] "RemoveContainer" containerID="4e0b56487c0891769b900785c75740ae8b2d583aef8fdba8634f334aa546957f" Dec 05 11:01:52 crc kubenswrapper[4796]: E1205 11:01:52.173323 4796 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4e0b56487c0891769b900785c75740ae8b2d583aef8fdba8634f334aa546957f\": container with ID starting with 4e0b56487c0891769b900785c75740ae8b2d583aef8fdba8634f334aa546957f not found: ID does not exist" containerID="4e0b56487c0891769b900785c75740ae8b2d583aef8fdba8634f334aa546957f" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.173356 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e0b56487c0891769b900785c75740ae8b2d583aef8fdba8634f334aa546957f"} err="failed to get container status \"4e0b56487c0891769b900785c75740ae8b2d583aef8fdba8634f334aa546957f\": rpc error: code = NotFound desc = could not find container \"4e0b56487c0891769b900785c75740ae8b2d583aef8fdba8634f334aa546957f\": container with ID starting with 4e0b56487c0891769b900785c75740ae8b2d583aef8fdba8634f334aa546957f not found: ID does not exist" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.173369 4796 scope.go:117] "RemoveContainer" containerID="10bb981ab45f90e4a9484cce361513f3d0a103c7325469ebf3f023350f5791a0" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.186795 4796 scope.go:117] "RemoveContainer" containerID="7ce08d224e2f3f558c49bd9b164cbe3b2709ac17e4ea029b0e756eee0a2d3373" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.207578 4796 scope.go:117] "RemoveContainer" containerID="e7e24e76e1a172699495237464ecee732024f656a09c28201dbef3a14e49d8e7" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.227640 4796 scope.go:117] "RemoveContainer" containerID="10bb981ab45f90e4a9484cce361513f3d0a103c7325469ebf3f023350f5791a0" Dec 05 11:01:52 crc kubenswrapper[4796]: E1205 11:01:52.228134 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10bb981ab45f90e4a9484cce361513f3d0a103c7325469ebf3f023350f5791a0\": container with ID starting with 
10bb981ab45f90e4a9484cce361513f3d0a103c7325469ebf3f023350f5791a0 not found: ID does not exist" containerID="10bb981ab45f90e4a9484cce361513f3d0a103c7325469ebf3f023350f5791a0" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.228191 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10bb981ab45f90e4a9484cce361513f3d0a103c7325469ebf3f023350f5791a0"} err="failed to get container status \"10bb981ab45f90e4a9484cce361513f3d0a103c7325469ebf3f023350f5791a0\": rpc error: code = NotFound desc = could not find container \"10bb981ab45f90e4a9484cce361513f3d0a103c7325469ebf3f023350f5791a0\": container with ID starting with 10bb981ab45f90e4a9484cce361513f3d0a103c7325469ebf3f023350f5791a0 not found: ID does not exist" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.228235 4796 scope.go:117] "RemoveContainer" containerID="7ce08d224e2f3f558c49bd9b164cbe3b2709ac17e4ea029b0e756eee0a2d3373" Dec 05 11:01:52 crc kubenswrapper[4796]: E1205 11:01:52.228587 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ce08d224e2f3f558c49bd9b164cbe3b2709ac17e4ea029b0e756eee0a2d3373\": container with ID starting with 7ce08d224e2f3f558c49bd9b164cbe3b2709ac17e4ea029b0e756eee0a2d3373 not found: ID does not exist" containerID="7ce08d224e2f3f558c49bd9b164cbe3b2709ac17e4ea029b0e756eee0a2d3373" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.228616 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ce08d224e2f3f558c49bd9b164cbe3b2709ac17e4ea029b0e756eee0a2d3373"} err="failed to get container status \"7ce08d224e2f3f558c49bd9b164cbe3b2709ac17e4ea029b0e756eee0a2d3373\": rpc error: code = NotFound desc = could not find container \"7ce08d224e2f3f558c49bd9b164cbe3b2709ac17e4ea029b0e756eee0a2d3373\": container with ID starting with 7ce08d224e2f3f558c49bd9b164cbe3b2709ac17e4ea029b0e756eee0a2d3373 not found: ID does not 
exist" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.228640 4796 scope.go:117] "RemoveContainer" containerID="e7e24e76e1a172699495237464ecee732024f656a09c28201dbef3a14e49d8e7" Dec 05 11:01:52 crc kubenswrapper[4796]: E1205 11:01:52.228996 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7e24e76e1a172699495237464ecee732024f656a09c28201dbef3a14e49d8e7\": container with ID starting with e7e24e76e1a172699495237464ecee732024f656a09c28201dbef3a14e49d8e7 not found: ID does not exist" containerID="e7e24e76e1a172699495237464ecee732024f656a09c28201dbef3a14e49d8e7" Dec 05 11:01:52 crc kubenswrapper[4796]: I1205 11:01:52.229035 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7e24e76e1a172699495237464ecee732024f656a09c28201dbef3a14e49d8e7"} err="failed to get container status \"e7e24e76e1a172699495237464ecee732024f656a09c28201dbef3a14e49d8e7\": rpc error: code = NotFound desc = could not find container \"e7e24e76e1a172699495237464ecee732024f656a09c28201dbef3a14e49d8e7\": container with ID starting with e7e24e76e1a172699495237464ecee732024f656a09c28201dbef3a14e49d8e7 not found: ID does not exist" Dec 05 11:01:54 crc kubenswrapper[4796]: I1205 11:01:54.049537 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="765e2de6-f9ec-4ed0-b8db-f97b834f7bd3" path="/var/lib/kubelet/pods/765e2de6-f9ec-4ed0-b8db-f97b834f7bd3/volumes" Dec 05 11:01:54 crc kubenswrapper[4796]: I1205 11:01:54.050518 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cde0671d-1321-48ae-8fbd-f04694e87bf4" path="/var/lib/kubelet/pods/cde0671d-1321-48ae-8fbd-f04694e87bf4/volumes" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.005088 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9d974"] Dec 05 11:02:26 crc kubenswrapper[4796]: E1205 11:02:26.006291 4796 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="765e2de6-f9ec-4ed0-b8db-f97b834f7bd3" containerName="registry-server" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.006305 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="765e2de6-f9ec-4ed0-b8db-f97b834f7bd3" containerName="registry-server" Dec 05 11:02:26 crc kubenswrapper[4796]: E1205 11:02:26.006317 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde0671d-1321-48ae-8fbd-f04694e87bf4" containerName="registry-server" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.006322 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde0671d-1321-48ae-8fbd-f04694e87bf4" containerName="registry-server" Dec 05 11:02:26 crc kubenswrapper[4796]: E1205 11:02:26.006540 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde0671d-1321-48ae-8fbd-f04694e87bf4" containerName="extract-content" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.006546 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde0671d-1321-48ae-8fbd-f04694e87bf4" containerName="extract-content" Dec 05 11:02:26 crc kubenswrapper[4796]: E1205 11:02:26.006559 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cde0671d-1321-48ae-8fbd-f04694e87bf4" containerName="extract-utilities" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.006565 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="cde0671d-1321-48ae-8fbd-f04694e87bf4" containerName="extract-utilities" Dec 05 11:02:26 crc kubenswrapper[4796]: E1205 11:02:26.006583 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="765e2de6-f9ec-4ed0-b8db-f97b834f7bd3" containerName="extract-utilities" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.006588 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="765e2de6-f9ec-4ed0-b8db-f97b834f7bd3" containerName="extract-utilities" Dec 05 11:02:26 crc kubenswrapper[4796]: E1205 11:02:26.006598 4796 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="765e2de6-f9ec-4ed0-b8db-f97b834f7bd3" containerName="extract-content" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.006603 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="765e2de6-f9ec-4ed0-b8db-f97b834f7bd3" containerName="extract-content" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.006788 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="765e2de6-f9ec-4ed0-b8db-f97b834f7bd3" containerName="registry-server" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.006804 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="cde0671d-1321-48ae-8fbd-f04694e87bf4" containerName="registry-server" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.008041 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9d974" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.013815 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9d974"] Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.102037 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a886c80-77c1-47cc-9381-73b8dc64ba31-utilities\") pod \"redhat-operators-9d974\" (UID: \"2a886c80-77c1-47cc-9381-73b8dc64ba31\") " pod="openshift-marketplace/redhat-operators-9d974" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.102350 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kt72\" (UniqueName: \"kubernetes.io/projected/2a886c80-77c1-47cc-9381-73b8dc64ba31-kube-api-access-8kt72\") pod \"redhat-operators-9d974\" (UID: \"2a886c80-77c1-47cc-9381-73b8dc64ba31\") " pod="openshift-marketplace/redhat-operators-9d974" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.102393 4796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a886c80-77c1-47cc-9381-73b8dc64ba31-catalog-content\") pod \"redhat-operators-9d974\" (UID: \"2a886c80-77c1-47cc-9381-73b8dc64ba31\") " pod="openshift-marketplace/redhat-operators-9d974" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.204900 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a886c80-77c1-47cc-9381-73b8dc64ba31-utilities\") pod \"redhat-operators-9d974\" (UID: \"2a886c80-77c1-47cc-9381-73b8dc64ba31\") " pod="openshift-marketplace/redhat-operators-9d974" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.204948 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kt72\" (UniqueName: \"kubernetes.io/projected/2a886c80-77c1-47cc-9381-73b8dc64ba31-kube-api-access-8kt72\") pod \"redhat-operators-9d974\" (UID: \"2a886c80-77c1-47cc-9381-73b8dc64ba31\") " pod="openshift-marketplace/redhat-operators-9d974" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.205046 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a886c80-77c1-47cc-9381-73b8dc64ba31-catalog-content\") pod \"redhat-operators-9d974\" (UID: \"2a886c80-77c1-47cc-9381-73b8dc64ba31\") " pod="openshift-marketplace/redhat-operators-9d974" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.205497 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a886c80-77c1-47cc-9381-73b8dc64ba31-utilities\") pod \"redhat-operators-9d974\" (UID: \"2a886c80-77c1-47cc-9381-73b8dc64ba31\") " pod="openshift-marketplace/redhat-operators-9d974" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.205548 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a886c80-77c1-47cc-9381-73b8dc64ba31-catalog-content\") pod \"redhat-operators-9d974\" (UID: \"2a886c80-77c1-47cc-9381-73b8dc64ba31\") " pod="openshift-marketplace/redhat-operators-9d974" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.223784 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kt72\" (UniqueName: \"kubernetes.io/projected/2a886c80-77c1-47cc-9381-73b8dc64ba31-kube-api-access-8kt72\") pod \"redhat-operators-9d974\" (UID: \"2a886c80-77c1-47cc-9381-73b8dc64ba31\") " pod="openshift-marketplace/redhat-operators-9d974" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.324322 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9d974" Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.428347 4796 generic.go:334] "Generic (PLEG): container finished" podID="91c465f7-7f18-43b4-9b15-d24ed713432f" containerID="746b3c36e18c05e5625e7a1c50d1e3be064138935ff87fa523e092dac7f578ea" exitCode=0 Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.428391 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" event={"ID":"91c465f7-7f18-43b4-9b15-d24ed713432f","Type":"ContainerDied","Data":"746b3c36e18c05e5625e7a1c50d1e3be064138935ff87fa523e092dac7f578ea"} Dec 05 11:02:26 crc kubenswrapper[4796]: I1205 11:02:26.754608 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9d974"] Dec 05 11:02:27 crc kubenswrapper[4796]: I1205 11:02:27.441604 4796 generic.go:334] "Generic (PLEG): container finished" podID="2a886c80-77c1-47cc-9381-73b8dc64ba31" containerID="a79e5a48a63c8b7f49012957cee3cd6a67a679ddfed072b5e53f88643cf1af17" exitCode=0 Dec 05 11:02:27 crc kubenswrapper[4796]: I1205 11:02:27.441731 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-9d974" event={"ID":"2a886c80-77c1-47cc-9381-73b8dc64ba31","Type":"ContainerDied","Data":"a79e5a48a63c8b7f49012957cee3cd6a67a679ddfed072b5e53f88643cf1af17"} Dec 05 11:02:27 crc kubenswrapper[4796]: I1205 11:02:27.441919 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9d974" event={"ID":"2a886c80-77c1-47cc-9381-73b8dc64ba31","Type":"ContainerStarted","Data":"683a4fcea6245877c563d8b93bbe531a9cee84deeab7cce43dcf3e84d31c62ec"} Dec 05 11:02:27 crc kubenswrapper[4796]: I1205 11:02:27.804584 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:02:27 crc kubenswrapper[4796]: I1205 11:02:27.938857 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ssh-key\") pod \"91c465f7-7f18-43b4-9b15-d24ed713432f\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " Dec 05 11:02:27 crc kubenswrapper[4796]: I1205 11:02:27.938919 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ceilometer-compute-config-data-2\") pod \"91c465f7-7f18-43b4-9b15-d24ed713432f\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " Dec 05 11:02:27 crc kubenswrapper[4796]: I1205 11:02:27.939068 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-telemetry-combined-ca-bundle\") pod \"91c465f7-7f18-43b4-9b15-d24ed713432f\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " Dec 05 11:02:27 crc kubenswrapper[4796]: I1205 11:02:27.939102 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-inventory\") pod \"91c465f7-7f18-43b4-9b15-d24ed713432f\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " Dec 05 11:02:27 crc kubenswrapper[4796]: I1205 11:02:27.939134 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqwfn\" (UniqueName: \"kubernetes.io/projected/91c465f7-7f18-43b4-9b15-d24ed713432f-kube-api-access-cqwfn\") pod \"91c465f7-7f18-43b4-9b15-d24ed713432f\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " Dec 05 11:02:27 crc kubenswrapper[4796]: I1205 11:02:27.939166 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ceilometer-compute-config-data-1\") pod \"91c465f7-7f18-43b4-9b15-d24ed713432f\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " Dec 05 11:02:27 crc kubenswrapper[4796]: I1205 11:02:27.939427 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ceilometer-compute-config-data-0\") pod \"91c465f7-7f18-43b4-9b15-d24ed713432f\" (UID: \"91c465f7-7f18-43b4-9b15-d24ed713432f\") " Dec 05 11:02:27 crc kubenswrapper[4796]: I1205 11:02:27.945845 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "91c465f7-7f18-43b4-9b15-d24ed713432f" (UID: "91c465f7-7f18-43b4-9b15-d24ed713432f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:02:27 crc kubenswrapper[4796]: I1205 11:02:27.946115 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91c465f7-7f18-43b4-9b15-d24ed713432f-kube-api-access-cqwfn" (OuterVolumeSpecName: "kube-api-access-cqwfn") pod "91c465f7-7f18-43b4-9b15-d24ed713432f" (UID: "91c465f7-7f18-43b4-9b15-d24ed713432f"). InnerVolumeSpecName "kube-api-access-cqwfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:02:27 crc kubenswrapper[4796]: I1205 11:02:27.966618 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "91c465f7-7f18-43b4-9b15-d24ed713432f" (UID: "91c465f7-7f18-43b4-9b15-d24ed713432f"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:02:27 crc kubenswrapper[4796]: I1205 11:02:27.967931 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "91c465f7-7f18-43b4-9b15-d24ed713432f" (UID: "91c465f7-7f18-43b4-9b15-d24ed713432f"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:02:27 crc kubenswrapper[4796]: I1205 11:02:27.968601 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "91c465f7-7f18-43b4-9b15-d24ed713432f" (UID: "91c465f7-7f18-43b4-9b15-d24ed713432f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:02:27 crc kubenswrapper[4796]: I1205 11:02:27.969063 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "91c465f7-7f18-43b4-9b15-d24ed713432f" (UID: "91c465f7-7f18-43b4-9b15-d24ed713432f"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:02:27 crc kubenswrapper[4796]: I1205 11:02:27.971746 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-inventory" (OuterVolumeSpecName: "inventory") pod "91c465f7-7f18-43b4-9b15-d24ed713432f" (UID: "91c465f7-7f18-43b4-9b15-d24ed713432f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:02:28 crc kubenswrapper[4796]: I1205 11:02:28.042036 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 05 11:02:28 crc kubenswrapper[4796]: I1205 11:02:28.042064 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 11:02:28 crc kubenswrapper[4796]: I1205 11:02:28.042076 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 05 11:02:28 crc kubenswrapper[4796]: I1205 11:02:28.042091 4796 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 05 11:02:28 crc kubenswrapper[4796]: I1205 11:02:28.042148 4796 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-inventory\") on node \"crc\" DevicePath \"\"" Dec 05 11:02:28 crc kubenswrapper[4796]: I1205 11:02:28.042189 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqwfn\" (UniqueName: \"kubernetes.io/projected/91c465f7-7f18-43b4-9b15-d24ed713432f-kube-api-access-cqwfn\") on node \"crc\" DevicePath \"\"" Dec 05 11:02:28 crc kubenswrapper[4796]: I1205 11:02:28.042259 4796 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/91c465f7-7f18-43b4-9b15-d24ed713432f-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 05 11:02:28 crc kubenswrapper[4796]: I1205 11:02:28.457056 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" event={"ID":"91c465f7-7f18-43b4-9b15-d24ed713432f","Type":"ContainerDied","Data":"7d159fcdc8281bdcc80ca611c0d02e146143335103430cbdd73f36cb86313f2b"} Dec 05 11:02:28 crc kubenswrapper[4796]: I1205 11:02:28.457357 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d159fcdc8281bdcc80ca611c0d02e146143335103430cbdd73f36cb86313f2b" Dec 05 11:02:28 crc kubenswrapper[4796]: I1205 11:02:28.457162 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4" Dec 05 11:02:29 crc kubenswrapper[4796]: I1205 11:02:29.467970 4796 generic.go:334] "Generic (PLEG): container finished" podID="2a886c80-77c1-47cc-9381-73b8dc64ba31" containerID="b8d07c21570c01d6713fb7a1ab12421b0bf4e791feed23027245c254e48f9231" exitCode=0 Dec 05 11:02:29 crc kubenswrapper[4796]: I1205 11:02:29.468028 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9d974" event={"ID":"2a886c80-77c1-47cc-9381-73b8dc64ba31","Type":"ContainerDied","Data":"b8d07c21570c01d6713fb7a1ab12421b0bf4e791feed23027245c254e48f9231"} Dec 05 11:02:30 crc kubenswrapper[4796]: I1205 11:02:30.477883 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9d974" event={"ID":"2a886c80-77c1-47cc-9381-73b8dc64ba31","Type":"ContainerStarted","Data":"c1bd9f130e3a6809309edbfa39f6ac36915dd31fd09816ffda64d5a21c5e3083"} Dec 05 11:02:30 crc kubenswrapper[4796]: I1205 11:02:30.497073 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9d974" podStartSLOduration=3.026041137 podStartE2EDuration="5.497055595s" podCreationTimestamp="2025-12-05 11:02:25 +0000 UTC" firstStartedPulling="2025-12-05 11:02:27.443240678 +0000 UTC m=+2093.731346191" lastFinishedPulling="2025-12-05 11:02:29.914255137 +0000 UTC m=+2096.202360649" observedRunningTime="2025-12-05 11:02:30.494710737 +0000 UTC m=+2096.782816251" watchObservedRunningTime="2025-12-05 11:02:30.497055595 +0000 UTC m=+2096.785161108" Dec 05 11:02:36 crc kubenswrapper[4796]: I1205 11:02:36.324532 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9d974" Dec 05 11:02:36 crc kubenswrapper[4796]: I1205 11:02:36.325487 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9d974" Dec 05 
11:02:36 crc kubenswrapper[4796]: I1205 11:02:36.369161 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9d974" Dec 05 11:02:36 crc kubenswrapper[4796]: I1205 11:02:36.569655 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9d974" Dec 05 11:02:36 crc kubenswrapper[4796]: I1205 11:02:36.613727 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9d974"] Dec 05 11:02:38 crc kubenswrapper[4796]: I1205 11:02:38.545027 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9d974" podUID="2a886c80-77c1-47cc-9381-73b8dc64ba31" containerName="registry-server" containerID="cri-o://c1bd9f130e3a6809309edbfa39f6ac36915dd31fd09816ffda64d5a21c5e3083" gracePeriod=2 Dec 05 11:02:38 crc kubenswrapper[4796]: I1205 11:02:38.959969 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9d974" Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.006908 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a886c80-77c1-47cc-9381-73b8dc64ba31-catalog-content\") pod \"2a886c80-77c1-47cc-9381-73b8dc64ba31\" (UID: \"2a886c80-77c1-47cc-9381-73b8dc64ba31\") " Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.007409 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kt72\" (UniqueName: \"kubernetes.io/projected/2a886c80-77c1-47cc-9381-73b8dc64ba31-kube-api-access-8kt72\") pod \"2a886c80-77c1-47cc-9381-73b8dc64ba31\" (UID: \"2a886c80-77c1-47cc-9381-73b8dc64ba31\") " Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.007529 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a886c80-77c1-47cc-9381-73b8dc64ba31-utilities\") pod \"2a886c80-77c1-47cc-9381-73b8dc64ba31\" (UID: \"2a886c80-77c1-47cc-9381-73b8dc64ba31\") " Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.008476 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a886c80-77c1-47cc-9381-73b8dc64ba31-utilities" (OuterVolumeSpecName: "utilities") pod "2a886c80-77c1-47cc-9381-73b8dc64ba31" (UID: "2a886c80-77c1-47cc-9381-73b8dc64ba31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.015573 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a886c80-77c1-47cc-9381-73b8dc64ba31-kube-api-access-8kt72" (OuterVolumeSpecName: "kube-api-access-8kt72") pod "2a886c80-77c1-47cc-9381-73b8dc64ba31" (UID: "2a886c80-77c1-47cc-9381-73b8dc64ba31"). InnerVolumeSpecName "kube-api-access-8kt72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.091517 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a886c80-77c1-47cc-9381-73b8dc64ba31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a886c80-77c1-47cc-9381-73b8dc64ba31" (UID: "2a886c80-77c1-47cc-9381-73b8dc64ba31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.111302 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a886c80-77c1-47cc-9381-73b8dc64ba31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.111338 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kt72\" (UniqueName: \"kubernetes.io/projected/2a886c80-77c1-47cc-9381-73b8dc64ba31-kube-api-access-8kt72\") on node \"crc\" DevicePath \"\"" Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.111351 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a886c80-77c1-47cc-9381-73b8dc64ba31-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.559128 4796 generic.go:334] "Generic (PLEG): container finished" podID="2a886c80-77c1-47cc-9381-73b8dc64ba31" containerID="c1bd9f130e3a6809309edbfa39f6ac36915dd31fd09816ffda64d5a21c5e3083" exitCode=0 Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.559245 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9d974" Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.559244 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9d974" event={"ID":"2a886c80-77c1-47cc-9381-73b8dc64ba31","Type":"ContainerDied","Data":"c1bd9f130e3a6809309edbfa39f6ac36915dd31fd09816ffda64d5a21c5e3083"} Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.560408 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9d974" event={"ID":"2a886c80-77c1-47cc-9381-73b8dc64ba31","Type":"ContainerDied","Data":"683a4fcea6245877c563d8b93bbe531a9cee84deeab7cce43dcf3e84d31c62ec"} Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.560439 4796 scope.go:117] "RemoveContainer" containerID="c1bd9f130e3a6809309edbfa39f6ac36915dd31fd09816ffda64d5a21c5e3083" Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.580609 4796 scope.go:117] "RemoveContainer" containerID="b8d07c21570c01d6713fb7a1ab12421b0bf4e791feed23027245c254e48f9231" Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.595393 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9d974"] Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.604297 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9d974"] Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.626108 4796 scope.go:117] "RemoveContainer" containerID="a79e5a48a63c8b7f49012957cee3cd6a67a679ddfed072b5e53f88643cf1af17" Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.644222 4796 scope.go:117] "RemoveContainer" containerID="c1bd9f130e3a6809309edbfa39f6ac36915dd31fd09816ffda64d5a21c5e3083" Dec 05 11:02:39 crc kubenswrapper[4796]: E1205 11:02:39.644640 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c1bd9f130e3a6809309edbfa39f6ac36915dd31fd09816ffda64d5a21c5e3083\": container with ID starting with c1bd9f130e3a6809309edbfa39f6ac36915dd31fd09816ffda64d5a21c5e3083 not found: ID does not exist" containerID="c1bd9f130e3a6809309edbfa39f6ac36915dd31fd09816ffda64d5a21c5e3083" Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.644704 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1bd9f130e3a6809309edbfa39f6ac36915dd31fd09816ffda64d5a21c5e3083"} err="failed to get container status \"c1bd9f130e3a6809309edbfa39f6ac36915dd31fd09816ffda64d5a21c5e3083\": rpc error: code = NotFound desc = could not find container \"c1bd9f130e3a6809309edbfa39f6ac36915dd31fd09816ffda64d5a21c5e3083\": container with ID starting with c1bd9f130e3a6809309edbfa39f6ac36915dd31fd09816ffda64d5a21c5e3083 not found: ID does not exist" Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.644735 4796 scope.go:117] "RemoveContainer" containerID="b8d07c21570c01d6713fb7a1ab12421b0bf4e791feed23027245c254e48f9231" Dec 05 11:02:39 crc kubenswrapper[4796]: E1205 11:02:39.645052 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8d07c21570c01d6713fb7a1ab12421b0bf4e791feed23027245c254e48f9231\": container with ID starting with b8d07c21570c01d6713fb7a1ab12421b0bf4e791feed23027245c254e48f9231 not found: ID does not exist" containerID="b8d07c21570c01d6713fb7a1ab12421b0bf4e791feed23027245c254e48f9231" Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.645086 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8d07c21570c01d6713fb7a1ab12421b0bf4e791feed23027245c254e48f9231"} err="failed to get container status \"b8d07c21570c01d6713fb7a1ab12421b0bf4e791feed23027245c254e48f9231\": rpc error: code = NotFound desc = could not find container \"b8d07c21570c01d6713fb7a1ab12421b0bf4e791feed23027245c254e48f9231\": container with ID 
starting with b8d07c21570c01d6713fb7a1ab12421b0bf4e791feed23027245c254e48f9231 not found: ID does not exist" Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.645108 4796 scope.go:117] "RemoveContainer" containerID="a79e5a48a63c8b7f49012957cee3cd6a67a679ddfed072b5e53f88643cf1af17" Dec 05 11:02:39 crc kubenswrapper[4796]: E1205 11:02:39.645373 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a79e5a48a63c8b7f49012957cee3cd6a67a679ddfed072b5e53f88643cf1af17\": container with ID starting with a79e5a48a63c8b7f49012957cee3cd6a67a679ddfed072b5e53f88643cf1af17 not found: ID does not exist" containerID="a79e5a48a63c8b7f49012957cee3cd6a67a679ddfed072b5e53f88643cf1af17" Dec 05 11:02:39 crc kubenswrapper[4796]: I1205 11:02:39.645406 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a79e5a48a63c8b7f49012957cee3cd6a67a679ddfed072b5e53f88643cf1af17"} err="failed to get container status \"a79e5a48a63c8b7f49012957cee3cd6a67a679ddfed072b5e53f88643cf1af17\": rpc error: code = NotFound desc = could not find container \"a79e5a48a63c8b7f49012957cee3cd6a67a679ddfed072b5e53f88643cf1af17\": container with ID starting with a79e5a48a63c8b7f49012957cee3cd6a67a679ddfed072b5e53f88643cf1af17 not found: ID does not exist" Dec 05 11:02:40 crc kubenswrapper[4796]: I1205 11:02:40.043718 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a886c80-77c1-47cc-9381-73b8dc64ba31" path="/var/lib/kubelet/pods/2a886c80-77c1-47cc-9381-73b8dc64ba31/volumes" Dec 05 11:03:05 crc kubenswrapper[4796]: I1205 11:03:05.177777 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 11:03:05 crc kubenswrapper[4796]: I1205 
11:03:05.179477 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.811643 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 05 11:03:23 crc kubenswrapper[4796]: E1205 11:03:23.813361 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c465f7-7f18-43b4-9b15-d24ed713432f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.813391 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c465f7-7f18-43b4-9b15-d24ed713432f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 05 11:03:23 crc kubenswrapper[4796]: E1205 11:03:23.813411 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a886c80-77c1-47cc-9381-73b8dc64ba31" containerName="extract-content" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.813418 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a886c80-77c1-47cc-9381-73b8dc64ba31" containerName="extract-content" Dec 05 11:03:23 crc kubenswrapper[4796]: E1205 11:03:23.813444 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a886c80-77c1-47cc-9381-73b8dc64ba31" containerName="registry-server" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.813454 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a886c80-77c1-47cc-9381-73b8dc64ba31" containerName="registry-server" Dec 05 11:03:23 crc kubenswrapper[4796]: E1205 11:03:23.813509 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a886c80-77c1-47cc-9381-73b8dc64ba31" containerName="extract-utilities" Dec 05 11:03:23 crc kubenswrapper[4796]: 
I1205 11:03:23.813520 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a886c80-77c1-47cc-9381-73b8dc64ba31" containerName="extract-utilities" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.813798 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a886c80-77c1-47cc-9381-73b8dc64ba31" containerName="registry-server" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.813815 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="91c465f7-7f18-43b4-9b15-d24ed713432f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.814913 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.817852 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.817871 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-f4wwn" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.818081 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.819238 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.822788 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.851471 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: 
\"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.852316 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.852431 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-config-data\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.954733 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.954805 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.954857 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: 
\"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.955605 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.955653 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl75s\" (UniqueName: \"kubernetes.io/projected/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-kube-api-access-fl75s\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.955705 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.955908 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.956047 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-config-data\") pod \"tempest-tests-tempest\" (UID: 
\"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.956249 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.957357 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.957438 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-config-data\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:23 crc kubenswrapper[4796]: I1205 11:03:23.964031 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:24 crc kubenswrapper[4796]: I1205 11:03:24.058344 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:24 crc kubenswrapper[4796]: I1205 
11:03:24.058418 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:24 crc kubenswrapper[4796]: I1205 11:03:24.058457 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:24 crc kubenswrapper[4796]: I1205 11:03:24.058507 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:24 crc kubenswrapper[4796]: I1205 11:03:24.058541 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl75s\" (UniqueName: \"kubernetes.io/projected/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-kube-api-access-fl75s\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:24 crc kubenswrapper[4796]: I1205 11:03:24.058585 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:24 crc kubenswrapper[4796]: I1205 11:03:24.059162 4796 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest" Dec 05 11:03:24 crc kubenswrapper[4796]: I1205 11:03:24.059347 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:24 crc kubenswrapper[4796]: I1205 11:03:24.059828 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:24 crc kubenswrapper[4796]: I1205 11:03:24.062748 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:24 crc kubenswrapper[4796]: I1205 11:03:24.063907 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:24 crc kubenswrapper[4796]: I1205 11:03:24.076661 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl75s\" (UniqueName: 
\"kubernetes.io/projected/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-kube-api-access-fl75s\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:24 crc kubenswrapper[4796]: I1205 11:03:24.084518 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " pod="openstack/tempest-tests-tempest" Dec 05 11:03:24 crc kubenswrapper[4796]: I1205 11:03:24.134720 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 05 11:03:24 crc kubenswrapper[4796]: I1205 11:03:24.526023 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 05 11:03:24 crc kubenswrapper[4796]: I1205 11:03:24.993666 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b3a6fb02-a525-41cb-96f5-ad01c2999e4d","Type":"ContainerStarted","Data":"51700bc5713bd90d8a0529b48cbeee4d3bd11792590be149ac5a482b961c1f5f"} Dec 05 11:03:35 crc kubenswrapper[4796]: I1205 11:03:35.177904 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 11:03:35 crc kubenswrapper[4796]: I1205 11:03:35.178605 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 11:03:49 crc kubenswrapper[4796]: E1205 
11:03:49.143239 4796 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 05 11:03:49 crc kubenswrapper[4796]: E1205 11:03:49.144073 4796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMo
unt{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fl75s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(b3a6fb02-a525-41cb-96f5-ad01c2999e4d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 05 11:03:49 crc kubenswrapper[4796]: E1205 11:03:49.145352 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="b3a6fb02-a525-41cb-96f5-ad01c2999e4d" Dec 05 11:03:49 crc kubenswrapper[4796]: E1205 11:03:49.262440 4796 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="b3a6fb02-a525-41cb-96f5-ad01c2999e4d" Dec 05 11:04:00 crc kubenswrapper[4796]: I1205 11:04:00.562649 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 05 11:04:01 crc kubenswrapper[4796]: I1205 11:04:01.385995 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b3a6fb02-a525-41cb-96f5-ad01c2999e4d","Type":"ContainerStarted","Data":"8578812c55dc487fcd8f20be8da790a3c02693e580e6d0d014995cefcb868764"} Dec 05 11:04:01 crc kubenswrapper[4796]: I1205 11:04:01.412232 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.38525624 podStartE2EDuration="39.412206253s" podCreationTimestamp="2025-12-05 11:03:22 +0000 UTC" firstStartedPulling="2025-12-05 11:03:24.532312428 +0000 UTC m=+2150.820417941" lastFinishedPulling="2025-12-05 11:04:00.559262442 +0000 UTC m=+2186.847367954" observedRunningTime="2025-12-05 11:04:01.40585208 +0000 UTC m=+2187.693957604" watchObservedRunningTime="2025-12-05 11:04:01.412206253 +0000 UTC m=+2187.700311765" Dec 05 11:04:05 crc kubenswrapper[4796]: I1205 11:04:05.177021 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 11:04:05 crc kubenswrapper[4796]: I1205 11:04:05.177721 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" 
podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 11:04:05 crc kubenswrapper[4796]: I1205 11:04:05.177778 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 11:04:05 crc kubenswrapper[4796]: I1205 11:04:05.178444 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377"} pod="openshift-machine-config-operator/machine-config-daemon-9pllw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 11:04:05 crc kubenswrapper[4796]: I1205 11:04:05.178500 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" containerID="cri-o://9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" gracePeriod=600 Dec 05 11:04:05 crc kubenswrapper[4796]: E1205 11:04:05.299671 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:04:05 crc kubenswrapper[4796]: I1205 11:04:05.432895 4796 generic.go:334] "Generic (PLEG): container finished" podID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" exitCode=0 Dec 05 
11:04:05 crc kubenswrapper[4796]: I1205 11:04:05.432969 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerDied","Data":"9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377"} Dec 05 11:04:05 crc kubenswrapper[4796]: I1205 11:04:05.433068 4796 scope.go:117] "RemoveContainer" containerID="dcb01733a1303955cc7132d26ec26ddb1901af7c36f36800bac1df8150042858" Dec 05 11:04:05 crc kubenswrapper[4796]: I1205 11:04:05.434027 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:04:05 crc kubenswrapper[4796]: E1205 11:04:05.434406 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:04:18 crc kubenswrapper[4796]: I1205 11:04:18.032529 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:04:18 crc kubenswrapper[4796]: E1205 11:04:18.033477 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:04:30 crc kubenswrapper[4796]: I1205 11:04:30.032255 4796 scope.go:117] "RemoveContainer" 
containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:04:30 crc kubenswrapper[4796]: E1205 11:04:30.033217 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:04:45 crc kubenswrapper[4796]: I1205 11:04:45.031501 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:04:45 crc kubenswrapper[4796]: E1205 11:04:45.032959 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:04:57 crc kubenswrapper[4796]: I1205 11:04:57.031680 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:04:57 crc kubenswrapper[4796]: E1205 11:04:57.033902 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:05:10 crc kubenswrapper[4796]: I1205 11:05:10.031074 4796 scope.go:117] 
"RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:05:10 crc kubenswrapper[4796]: E1205 11:05:10.032170 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:05:21 crc kubenswrapper[4796]: I1205 11:05:21.031452 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:05:21 crc kubenswrapper[4796]: E1205 11:05:21.032156 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:05:36 crc kubenswrapper[4796]: I1205 11:05:36.031864 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:05:36 crc kubenswrapper[4796]: E1205 11:05:36.032877 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:05:48 crc kubenswrapper[4796]: I1205 11:05:48.031959 
4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:05:48 crc kubenswrapper[4796]: E1205 11:05:48.032910 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:06:02 crc kubenswrapper[4796]: I1205 11:06:02.031485 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:06:02 crc kubenswrapper[4796]: E1205 11:06:02.032449 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:06:16 crc kubenswrapper[4796]: I1205 11:06:16.031193 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:06:16 crc kubenswrapper[4796]: E1205 11:06:16.032325 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:06:29 crc kubenswrapper[4796]: I1205 
11:06:29.030780 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:06:29 crc kubenswrapper[4796]: E1205 11:06:29.031737 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:06:44 crc kubenswrapper[4796]: I1205 11:06:44.038770 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:06:44 crc kubenswrapper[4796]: E1205 11:06:44.040070 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:06:57 crc kubenswrapper[4796]: I1205 11:06:57.031529 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:06:57 crc kubenswrapper[4796]: E1205 11:06:57.032507 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:07:08 crc 
kubenswrapper[4796]: I1205 11:07:08.031836 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:07:08 crc kubenswrapper[4796]: E1205 11:07:08.032723 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:07:21 crc kubenswrapper[4796]: I1205 11:07:21.030991 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:07:21 crc kubenswrapper[4796]: E1205 11:07:21.031799 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:07:35 crc kubenswrapper[4796]: I1205 11:07:35.031617 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:07:35 crc kubenswrapper[4796]: E1205 11:07:35.032608 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 
05 11:07:48 crc kubenswrapper[4796]: I1205 11:07:48.032101 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:07:48 crc kubenswrapper[4796]: E1205 11:07:48.033093 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:08:02 crc kubenswrapper[4796]: I1205 11:08:02.032292 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:08:02 crc kubenswrapper[4796]: E1205 11:08:02.033288 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:08:15 crc kubenswrapper[4796]: I1205 11:08:15.031812 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:08:15 crc kubenswrapper[4796]: E1205 11:08:15.032858 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" 
podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:08:27 crc kubenswrapper[4796]: I1205 11:08:27.031148 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:08:27 crc kubenswrapper[4796]: E1205 11:08:27.032052 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:08:39 crc kubenswrapper[4796]: I1205 11:08:39.031817 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:08:39 crc kubenswrapper[4796]: E1205 11:08:39.032910 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:08:51 crc kubenswrapper[4796]: I1205 11:08:51.031483 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:08:51 crc kubenswrapper[4796]: E1205 11:08:51.032269 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:09:03 crc kubenswrapper[4796]: I1205 11:09:03.031391 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:09:03 crc kubenswrapper[4796]: E1205 11:09:03.032298 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:09:14 crc kubenswrapper[4796]: I1205 11:09:14.037068 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:09:15 crc kubenswrapper[4796]: I1205 11:09:15.082410 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerStarted","Data":"38444dbc2e54bd632b755d0a32520bd65ff9b96410767c08f22e96e697037a59"} Dec 05 11:10:24 crc kubenswrapper[4796]: I1205 11:10:24.801928 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n8kb5"] Dec 05 11:10:24 crc kubenswrapper[4796]: I1205 11:10:24.805331 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n8kb5" Dec 05 11:10:24 crc kubenswrapper[4796]: I1205 11:10:24.813959 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n8kb5"] Dec 05 11:10:24 crc kubenswrapper[4796]: I1205 11:10:24.820192 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8511b192-d979-42cb-9ac1-af297712e6e0-utilities\") pod \"certified-operators-n8kb5\" (UID: \"8511b192-d979-42cb-9ac1-af297712e6e0\") " pod="openshift-marketplace/certified-operators-n8kb5" Dec 05 11:10:24 crc kubenswrapper[4796]: I1205 11:10:24.820459 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8511b192-d979-42cb-9ac1-af297712e6e0-catalog-content\") pod \"certified-operators-n8kb5\" (UID: \"8511b192-d979-42cb-9ac1-af297712e6e0\") " pod="openshift-marketplace/certified-operators-n8kb5" Dec 05 11:10:24 crc kubenswrapper[4796]: I1205 11:10:24.820491 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lrqz\" (UniqueName: \"kubernetes.io/projected/8511b192-d979-42cb-9ac1-af297712e6e0-kube-api-access-7lrqz\") pod \"certified-operators-n8kb5\" (UID: \"8511b192-d979-42cb-9ac1-af297712e6e0\") " pod="openshift-marketplace/certified-operators-n8kb5" Dec 05 11:10:24 crc kubenswrapper[4796]: I1205 11:10:24.922036 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8511b192-d979-42cb-9ac1-af297712e6e0-catalog-content\") pod \"certified-operators-n8kb5\" (UID: \"8511b192-d979-42cb-9ac1-af297712e6e0\") " pod="openshift-marketplace/certified-operators-n8kb5" Dec 05 11:10:24 crc kubenswrapper[4796]: I1205 11:10:24.922071 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7lrqz\" (UniqueName: \"kubernetes.io/projected/8511b192-d979-42cb-9ac1-af297712e6e0-kube-api-access-7lrqz\") pod \"certified-operators-n8kb5\" (UID: \"8511b192-d979-42cb-9ac1-af297712e6e0\") " pod="openshift-marketplace/certified-operators-n8kb5" Dec 05 11:10:24 crc kubenswrapper[4796]: I1205 11:10:24.922163 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8511b192-d979-42cb-9ac1-af297712e6e0-utilities\") pod \"certified-operators-n8kb5\" (UID: \"8511b192-d979-42cb-9ac1-af297712e6e0\") " pod="openshift-marketplace/certified-operators-n8kb5" Dec 05 11:10:24 crc kubenswrapper[4796]: I1205 11:10:24.922590 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8511b192-d979-42cb-9ac1-af297712e6e0-utilities\") pod \"certified-operators-n8kb5\" (UID: \"8511b192-d979-42cb-9ac1-af297712e6e0\") " pod="openshift-marketplace/certified-operators-n8kb5" Dec 05 11:10:24 crc kubenswrapper[4796]: I1205 11:10:24.922874 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8511b192-d979-42cb-9ac1-af297712e6e0-catalog-content\") pod \"certified-operators-n8kb5\" (UID: \"8511b192-d979-42cb-9ac1-af297712e6e0\") " pod="openshift-marketplace/certified-operators-n8kb5" Dec 05 11:10:24 crc kubenswrapper[4796]: I1205 11:10:24.940541 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lrqz\" (UniqueName: \"kubernetes.io/projected/8511b192-d979-42cb-9ac1-af297712e6e0-kube-api-access-7lrqz\") pod \"certified-operators-n8kb5\" (UID: \"8511b192-d979-42cb-9ac1-af297712e6e0\") " pod="openshift-marketplace/certified-operators-n8kb5" Dec 05 11:10:25 crc kubenswrapper[4796]: I1205 11:10:25.130765 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n8kb5" Dec 05 11:10:25 crc kubenswrapper[4796]: I1205 11:10:25.633617 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n8kb5"] Dec 05 11:10:25 crc kubenswrapper[4796]: I1205 11:10:25.652749 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8kb5" event={"ID":"8511b192-d979-42cb-9ac1-af297712e6e0","Type":"ContainerStarted","Data":"ea8c119089c91884604476e193068d4f3706fee53eb01a2e6e4b02dfa054b999"} Dec 05 11:10:26 crc kubenswrapper[4796]: I1205 11:10:26.663941 4796 generic.go:334] "Generic (PLEG): container finished" podID="8511b192-d979-42cb-9ac1-af297712e6e0" containerID="d693795062fb3647b8d8547161d4bfa68461ba7e6c54bd6e496e5673989ed8c8" exitCode=0 Dec 05 11:10:26 crc kubenswrapper[4796]: I1205 11:10:26.664052 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8kb5" event={"ID":"8511b192-d979-42cb-9ac1-af297712e6e0","Type":"ContainerDied","Data":"d693795062fb3647b8d8547161d4bfa68461ba7e6c54bd6e496e5673989ed8c8"} Dec 05 11:10:26 crc kubenswrapper[4796]: I1205 11:10:26.666579 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 11:10:28 crc kubenswrapper[4796]: I1205 11:10:28.681594 4796 generic.go:334] "Generic (PLEG): container finished" podID="8511b192-d979-42cb-9ac1-af297712e6e0" containerID="f6d67c37428647f03208dab29b095dd8fb1209591eccca7acc38043f574532dc" exitCode=0 Dec 05 11:10:28 crc kubenswrapper[4796]: I1205 11:10:28.681659 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8kb5" event={"ID":"8511b192-d979-42cb-9ac1-af297712e6e0","Type":"ContainerDied","Data":"f6d67c37428647f03208dab29b095dd8fb1209591eccca7acc38043f574532dc"} Dec 05 11:10:29 crc kubenswrapper[4796]: I1205 11:10:29.692843 4796 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-n8kb5" event={"ID":"8511b192-d979-42cb-9ac1-af297712e6e0","Type":"ContainerStarted","Data":"3dcd261a92b5fc11b5d83bd9ce91185520465a83cf0e2eab6ad703179f78489f"} Dec 05 11:10:29 crc kubenswrapper[4796]: I1205 11:10:29.713248 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n8kb5" podStartSLOduration=3.217583236 podStartE2EDuration="5.713214977s" podCreationTimestamp="2025-12-05 11:10:24 +0000 UTC" firstStartedPulling="2025-12-05 11:10:26.666298216 +0000 UTC m=+2572.954403730" lastFinishedPulling="2025-12-05 11:10:29.161929959 +0000 UTC m=+2575.450035471" observedRunningTime="2025-12-05 11:10:29.712724345 +0000 UTC m=+2576.000829858" watchObservedRunningTime="2025-12-05 11:10:29.713214977 +0000 UTC m=+2576.001320490" Dec 05 11:10:35 crc kubenswrapper[4796]: I1205 11:10:35.131654 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n8kb5" Dec 05 11:10:35 crc kubenswrapper[4796]: I1205 11:10:35.132267 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n8kb5" Dec 05 11:10:35 crc kubenswrapper[4796]: I1205 11:10:35.170015 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n8kb5" Dec 05 11:10:35 crc kubenswrapper[4796]: I1205 11:10:35.777898 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n8kb5" Dec 05 11:10:35 crc kubenswrapper[4796]: I1205 11:10:35.818464 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n8kb5"] Dec 05 11:10:37 crc kubenswrapper[4796]: I1205 11:10:37.757143 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n8kb5" 
podUID="8511b192-d979-42cb-9ac1-af297712e6e0" containerName="registry-server" containerID="cri-o://3dcd261a92b5fc11b5d83bd9ce91185520465a83cf0e2eab6ad703179f78489f" gracePeriod=2 Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.194326 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n8kb5" Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.396722 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lrqz\" (UniqueName: \"kubernetes.io/projected/8511b192-d979-42cb-9ac1-af297712e6e0-kube-api-access-7lrqz\") pod \"8511b192-d979-42cb-9ac1-af297712e6e0\" (UID: \"8511b192-d979-42cb-9ac1-af297712e6e0\") " Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.396809 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8511b192-d979-42cb-9ac1-af297712e6e0-catalog-content\") pod \"8511b192-d979-42cb-9ac1-af297712e6e0\" (UID: \"8511b192-d979-42cb-9ac1-af297712e6e0\") " Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.397094 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8511b192-d979-42cb-9ac1-af297712e6e0-utilities\") pod \"8511b192-d979-42cb-9ac1-af297712e6e0\" (UID: \"8511b192-d979-42cb-9ac1-af297712e6e0\") " Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.397828 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8511b192-d979-42cb-9ac1-af297712e6e0-utilities" (OuterVolumeSpecName: "utilities") pod "8511b192-d979-42cb-9ac1-af297712e6e0" (UID: "8511b192-d979-42cb-9ac1-af297712e6e0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.398918 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8511b192-d979-42cb-9ac1-af297712e6e0-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.402798 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8511b192-d979-42cb-9ac1-af297712e6e0-kube-api-access-7lrqz" (OuterVolumeSpecName: "kube-api-access-7lrqz") pod "8511b192-d979-42cb-9ac1-af297712e6e0" (UID: "8511b192-d979-42cb-9ac1-af297712e6e0"). InnerVolumeSpecName "kube-api-access-7lrqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.432083 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8511b192-d979-42cb-9ac1-af297712e6e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8511b192-d979-42cb-9ac1-af297712e6e0" (UID: "8511b192-d979-42cb-9ac1-af297712e6e0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.500186 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lrqz\" (UniqueName: \"kubernetes.io/projected/8511b192-d979-42cb-9ac1-af297712e6e0-kube-api-access-7lrqz\") on node \"crc\" DevicePath \"\"" Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.500215 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8511b192-d979-42cb-9ac1-af297712e6e0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.768563 4796 generic.go:334] "Generic (PLEG): container finished" podID="8511b192-d979-42cb-9ac1-af297712e6e0" containerID="3dcd261a92b5fc11b5d83bd9ce91185520465a83cf0e2eab6ad703179f78489f" exitCode=0 Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.768624 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8kb5" event={"ID":"8511b192-d979-42cb-9ac1-af297712e6e0","Type":"ContainerDied","Data":"3dcd261a92b5fc11b5d83bd9ce91185520465a83cf0e2eab6ad703179f78489f"} Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.769498 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8kb5" event={"ID":"8511b192-d979-42cb-9ac1-af297712e6e0","Type":"ContainerDied","Data":"ea8c119089c91884604476e193068d4f3706fee53eb01a2e6e4b02dfa054b999"} Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.769533 4796 scope.go:117] "RemoveContainer" containerID="3dcd261a92b5fc11b5d83bd9ce91185520465a83cf0e2eab6ad703179f78489f" Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.768642 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n8kb5" Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.792775 4796 scope.go:117] "RemoveContainer" containerID="f6d67c37428647f03208dab29b095dd8fb1209591eccca7acc38043f574532dc" Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.799992 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n8kb5"] Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.806403 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n8kb5"] Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.823397 4796 scope.go:117] "RemoveContainer" containerID="d693795062fb3647b8d8547161d4bfa68461ba7e6c54bd6e496e5673989ed8c8" Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.854111 4796 scope.go:117] "RemoveContainer" containerID="3dcd261a92b5fc11b5d83bd9ce91185520465a83cf0e2eab6ad703179f78489f" Dec 05 11:10:38 crc kubenswrapper[4796]: E1205 11:10:38.854628 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dcd261a92b5fc11b5d83bd9ce91185520465a83cf0e2eab6ad703179f78489f\": container with ID starting with 3dcd261a92b5fc11b5d83bd9ce91185520465a83cf0e2eab6ad703179f78489f not found: ID does not exist" containerID="3dcd261a92b5fc11b5d83bd9ce91185520465a83cf0e2eab6ad703179f78489f" Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.854669 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dcd261a92b5fc11b5d83bd9ce91185520465a83cf0e2eab6ad703179f78489f"} err="failed to get container status \"3dcd261a92b5fc11b5d83bd9ce91185520465a83cf0e2eab6ad703179f78489f\": rpc error: code = NotFound desc = could not find container \"3dcd261a92b5fc11b5d83bd9ce91185520465a83cf0e2eab6ad703179f78489f\": container with ID starting with 3dcd261a92b5fc11b5d83bd9ce91185520465a83cf0e2eab6ad703179f78489f not 
found: ID does not exist" Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.854716 4796 scope.go:117] "RemoveContainer" containerID="f6d67c37428647f03208dab29b095dd8fb1209591eccca7acc38043f574532dc" Dec 05 11:10:38 crc kubenswrapper[4796]: E1205 11:10:38.855160 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6d67c37428647f03208dab29b095dd8fb1209591eccca7acc38043f574532dc\": container with ID starting with f6d67c37428647f03208dab29b095dd8fb1209591eccca7acc38043f574532dc not found: ID does not exist" containerID="f6d67c37428647f03208dab29b095dd8fb1209591eccca7acc38043f574532dc" Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.855194 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6d67c37428647f03208dab29b095dd8fb1209591eccca7acc38043f574532dc"} err="failed to get container status \"f6d67c37428647f03208dab29b095dd8fb1209591eccca7acc38043f574532dc\": rpc error: code = NotFound desc = could not find container \"f6d67c37428647f03208dab29b095dd8fb1209591eccca7acc38043f574532dc\": container with ID starting with f6d67c37428647f03208dab29b095dd8fb1209591eccca7acc38043f574532dc not found: ID does not exist" Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.855216 4796 scope.go:117] "RemoveContainer" containerID="d693795062fb3647b8d8547161d4bfa68461ba7e6c54bd6e496e5673989ed8c8" Dec 05 11:10:38 crc kubenswrapper[4796]: E1205 11:10:38.855510 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d693795062fb3647b8d8547161d4bfa68461ba7e6c54bd6e496e5673989ed8c8\": container with ID starting with d693795062fb3647b8d8547161d4bfa68461ba7e6c54bd6e496e5673989ed8c8 not found: ID does not exist" containerID="d693795062fb3647b8d8547161d4bfa68461ba7e6c54bd6e496e5673989ed8c8" Dec 05 11:10:38 crc kubenswrapper[4796]: I1205 11:10:38.855553 4796 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d693795062fb3647b8d8547161d4bfa68461ba7e6c54bd6e496e5673989ed8c8"} err="failed to get container status \"d693795062fb3647b8d8547161d4bfa68461ba7e6c54bd6e496e5673989ed8c8\": rpc error: code = NotFound desc = could not find container \"d693795062fb3647b8d8547161d4bfa68461ba7e6c54bd6e496e5673989ed8c8\": container with ID starting with d693795062fb3647b8d8547161d4bfa68461ba7e6c54bd6e496e5673989ed8c8 not found: ID does not exist" Dec 05 11:10:40 crc kubenswrapper[4796]: I1205 11:10:40.040211 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8511b192-d979-42cb-9ac1-af297712e6e0" path="/var/lib/kubelet/pods/8511b192-d979-42cb-9ac1-af297712e6e0/volumes" Dec 05 11:11:35 crc kubenswrapper[4796]: I1205 11:11:35.177411 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 11:11:35 crc kubenswrapper[4796]: I1205 11:11:35.178048 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 11:12:05 crc kubenswrapper[4796]: I1205 11:12:05.177648 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 11:12:05 crc kubenswrapper[4796]: I1205 11:12:05.178376 4796 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 11:12:13 crc kubenswrapper[4796]: I1205 11:12:13.936708 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mdnfr"] Dec 05 11:12:13 crc kubenswrapper[4796]: E1205 11:12:13.938462 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8511b192-d979-42cb-9ac1-af297712e6e0" containerName="extract-content" Dec 05 11:12:13 crc kubenswrapper[4796]: I1205 11:12:13.938483 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8511b192-d979-42cb-9ac1-af297712e6e0" containerName="extract-content" Dec 05 11:12:13 crc kubenswrapper[4796]: E1205 11:12:13.938510 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8511b192-d979-42cb-9ac1-af297712e6e0" containerName="registry-server" Dec 05 11:12:13 crc kubenswrapper[4796]: I1205 11:12:13.938520 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8511b192-d979-42cb-9ac1-af297712e6e0" containerName="registry-server" Dec 05 11:12:13 crc kubenswrapper[4796]: E1205 11:12:13.938545 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8511b192-d979-42cb-9ac1-af297712e6e0" containerName="extract-utilities" Dec 05 11:12:13 crc kubenswrapper[4796]: I1205 11:12:13.938553 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="8511b192-d979-42cb-9ac1-af297712e6e0" containerName="extract-utilities" Dec 05 11:12:13 crc kubenswrapper[4796]: I1205 11:12:13.938983 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="8511b192-d979-42cb-9ac1-af297712e6e0" containerName="registry-server" Dec 05 11:12:13 crc kubenswrapper[4796]: I1205 11:12:13.941618 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mdnfr" Dec 05 11:12:13 crc kubenswrapper[4796]: I1205 11:12:13.968605 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdnfr"] Dec 05 11:12:14 crc kubenswrapper[4796]: I1205 11:12:14.049382 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dafdd10-d103-4367-a524-832b689426da-utilities\") pod \"redhat-marketplace-mdnfr\" (UID: \"7dafdd10-d103-4367-a524-832b689426da\") " pod="openshift-marketplace/redhat-marketplace-mdnfr" Dec 05 11:12:14 crc kubenswrapper[4796]: I1205 11:12:14.049437 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dafdd10-d103-4367-a524-832b689426da-catalog-content\") pod \"redhat-marketplace-mdnfr\" (UID: \"7dafdd10-d103-4367-a524-832b689426da\") " pod="openshift-marketplace/redhat-marketplace-mdnfr" Dec 05 11:12:14 crc kubenswrapper[4796]: I1205 11:12:14.049589 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zvfv\" (UniqueName: \"kubernetes.io/projected/7dafdd10-d103-4367-a524-832b689426da-kube-api-access-8zvfv\") pod \"redhat-marketplace-mdnfr\" (UID: \"7dafdd10-d103-4367-a524-832b689426da\") " pod="openshift-marketplace/redhat-marketplace-mdnfr" Dec 05 11:12:14 crc kubenswrapper[4796]: I1205 11:12:14.152564 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dafdd10-d103-4367-a524-832b689426da-utilities\") pod \"redhat-marketplace-mdnfr\" (UID: \"7dafdd10-d103-4367-a524-832b689426da\") " pod="openshift-marketplace/redhat-marketplace-mdnfr" Dec 05 11:12:14 crc kubenswrapper[4796]: I1205 11:12:14.152713 4796 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dafdd10-d103-4367-a524-832b689426da-catalog-content\") pod \"redhat-marketplace-mdnfr\" (UID: \"7dafdd10-d103-4367-a524-832b689426da\") " pod="openshift-marketplace/redhat-marketplace-mdnfr" Dec 05 11:12:14 crc kubenswrapper[4796]: I1205 11:12:14.152964 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zvfv\" (UniqueName: \"kubernetes.io/projected/7dafdd10-d103-4367-a524-832b689426da-kube-api-access-8zvfv\") pod \"redhat-marketplace-mdnfr\" (UID: \"7dafdd10-d103-4367-a524-832b689426da\") " pod="openshift-marketplace/redhat-marketplace-mdnfr" Dec 05 11:12:14 crc kubenswrapper[4796]: I1205 11:12:14.153226 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dafdd10-d103-4367-a524-832b689426da-utilities\") pod \"redhat-marketplace-mdnfr\" (UID: \"7dafdd10-d103-4367-a524-832b689426da\") " pod="openshift-marketplace/redhat-marketplace-mdnfr" Dec 05 11:12:14 crc kubenswrapper[4796]: I1205 11:12:14.153648 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dafdd10-d103-4367-a524-832b689426da-catalog-content\") pod \"redhat-marketplace-mdnfr\" (UID: \"7dafdd10-d103-4367-a524-832b689426da\") " pod="openshift-marketplace/redhat-marketplace-mdnfr" Dec 05 11:12:14 crc kubenswrapper[4796]: I1205 11:12:14.172401 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zvfv\" (UniqueName: \"kubernetes.io/projected/7dafdd10-d103-4367-a524-832b689426da-kube-api-access-8zvfv\") pod \"redhat-marketplace-mdnfr\" (UID: \"7dafdd10-d103-4367-a524-832b689426da\") " pod="openshift-marketplace/redhat-marketplace-mdnfr" Dec 05 11:12:14 crc kubenswrapper[4796]: I1205 11:12:14.263386 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mdnfr" Dec 05 11:12:14 crc kubenswrapper[4796]: I1205 11:12:14.705232 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdnfr"] Dec 05 11:12:15 crc kubenswrapper[4796]: I1205 11:12:15.614927 4796 generic.go:334] "Generic (PLEG): container finished" podID="7dafdd10-d103-4367-a524-832b689426da" containerID="764099660d1562fb393409ffd016693a7a1dd73ff42756a87812be93d86f6159" exitCode=0 Dec 05 11:12:15 crc kubenswrapper[4796]: I1205 11:12:15.615047 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdnfr" event={"ID":"7dafdd10-d103-4367-a524-832b689426da","Type":"ContainerDied","Data":"764099660d1562fb393409ffd016693a7a1dd73ff42756a87812be93d86f6159"} Dec 05 11:12:15 crc kubenswrapper[4796]: I1205 11:12:15.615380 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdnfr" event={"ID":"7dafdd10-d103-4367-a524-832b689426da","Type":"ContainerStarted","Data":"7fbc51887a66b46860f29ac5ebc3314069a7b95f7faa7e395f97312773d66496"} Dec 05 11:12:16 crc kubenswrapper[4796]: I1205 11:12:16.625820 4796 generic.go:334] "Generic (PLEG): container finished" podID="7dafdd10-d103-4367-a524-832b689426da" containerID="2c75fc531d4e746fb24e20ccde5f1ba98cdc3ca05dd00e7bd46faef0be0c19dc" exitCode=0 Dec 05 11:12:16 crc kubenswrapper[4796]: I1205 11:12:16.625917 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdnfr" event={"ID":"7dafdd10-d103-4367-a524-832b689426da","Type":"ContainerDied","Data":"2c75fc531d4e746fb24e20ccde5f1ba98cdc3ca05dd00e7bd46faef0be0c19dc"} Dec 05 11:12:17 crc kubenswrapper[4796]: I1205 11:12:17.635997 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdnfr" 
event={"ID":"7dafdd10-d103-4367-a524-832b689426da","Type":"ContainerStarted","Data":"ec37f3b92d97e6717c6fbe64326a0fa5a77760094af0cb41b93d38095f7b4e76"} Dec 05 11:12:17 crc kubenswrapper[4796]: I1205 11:12:17.676142 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mdnfr" podStartSLOduration=3.199212436 podStartE2EDuration="4.676120841s" podCreationTimestamp="2025-12-05 11:12:13 +0000 UTC" firstStartedPulling="2025-12-05 11:12:15.617613473 +0000 UTC m=+2681.905718986" lastFinishedPulling="2025-12-05 11:12:17.094521878 +0000 UTC m=+2683.382627391" observedRunningTime="2025-12-05 11:12:17.655543605 +0000 UTC m=+2683.943649119" watchObservedRunningTime="2025-12-05 11:12:17.676120841 +0000 UTC m=+2683.964226354" Dec 05 11:12:24 crc kubenswrapper[4796]: I1205 11:12:24.264033 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mdnfr" Dec 05 11:12:24 crc kubenswrapper[4796]: I1205 11:12:24.264759 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mdnfr" Dec 05 11:12:24 crc kubenswrapper[4796]: I1205 11:12:24.307874 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mdnfr" Dec 05 11:12:24 crc kubenswrapper[4796]: I1205 11:12:24.742811 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mdnfr" Dec 05 11:12:24 crc kubenswrapper[4796]: I1205 11:12:24.788622 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdnfr"] Dec 05 11:12:26 crc kubenswrapper[4796]: I1205 11:12:26.718455 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mdnfr" podUID="7dafdd10-d103-4367-a524-832b689426da" containerName="registry-server" 
containerID="cri-o://ec37f3b92d97e6717c6fbe64326a0fa5a77760094af0cb41b93d38095f7b4e76" gracePeriod=2 Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.166572 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mdnfr" Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.360795 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dafdd10-d103-4367-a524-832b689426da-utilities\") pod \"7dafdd10-d103-4367-a524-832b689426da\" (UID: \"7dafdd10-d103-4367-a524-832b689426da\") " Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.360937 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dafdd10-d103-4367-a524-832b689426da-catalog-content\") pod \"7dafdd10-d103-4367-a524-832b689426da\" (UID: \"7dafdd10-d103-4367-a524-832b689426da\") " Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.361012 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zvfv\" (UniqueName: \"kubernetes.io/projected/7dafdd10-d103-4367-a524-832b689426da-kube-api-access-8zvfv\") pod \"7dafdd10-d103-4367-a524-832b689426da\" (UID: \"7dafdd10-d103-4367-a524-832b689426da\") " Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.361596 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dafdd10-d103-4367-a524-832b689426da-utilities" (OuterVolumeSpecName: "utilities") pod "7dafdd10-d103-4367-a524-832b689426da" (UID: "7dafdd10-d103-4367-a524-832b689426da"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.367974 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dafdd10-d103-4367-a524-832b689426da-kube-api-access-8zvfv" (OuterVolumeSpecName: "kube-api-access-8zvfv") pod "7dafdd10-d103-4367-a524-832b689426da" (UID: "7dafdd10-d103-4367-a524-832b689426da"). InnerVolumeSpecName "kube-api-access-8zvfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.376982 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dafdd10-d103-4367-a524-832b689426da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7dafdd10-d103-4367-a524-832b689426da" (UID: "7dafdd10-d103-4367-a524-832b689426da"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.464562 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zvfv\" (UniqueName: \"kubernetes.io/projected/7dafdd10-d103-4367-a524-832b689426da-kube-api-access-8zvfv\") on node \"crc\" DevicePath \"\"" Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.464606 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dafdd10-d103-4367-a524-832b689426da-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.464617 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dafdd10-d103-4367-a524-832b689426da-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.726854 4796 generic.go:334] "Generic (PLEG): container finished" podID="7dafdd10-d103-4367-a524-832b689426da" 
containerID="ec37f3b92d97e6717c6fbe64326a0fa5a77760094af0cb41b93d38095f7b4e76" exitCode=0 Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.726896 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdnfr" event={"ID":"7dafdd10-d103-4367-a524-832b689426da","Type":"ContainerDied","Data":"ec37f3b92d97e6717c6fbe64326a0fa5a77760094af0cb41b93d38095f7b4e76"} Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.726939 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mdnfr" event={"ID":"7dafdd10-d103-4367-a524-832b689426da","Type":"ContainerDied","Data":"7fbc51887a66b46860f29ac5ebc3314069a7b95f7faa7e395f97312773d66496"} Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.726958 4796 scope.go:117] "RemoveContainer" containerID="ec37f3b92d97e6717c6fbe64326a0fa5a77760094af0cb41b93d38095f7b4e76" Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.726907 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mdnfr" Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.749944 4796 scope.go:117] "RemoveContainer" containerID="2c75fc531d4e746fb24e20ccde5f1ba98cdc3ca05dd00e7bd46faef0be0c19dc" Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.767564 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdnfr"] Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.770397 4796 scope.go:117] "RemoveContainer" containerID="764099660d1562fb393409ffd016693a7a1dd73ff42756a87812be93d86f6159" Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.775394 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mdnfr"] Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.803704 4796 scope.go:117] "RemoveContainer" containerID="ec37f3b92d97e6717c6fbe64326a0fa5a77760094af0cb41b93d38095f7b4e76" Dec 05 11:12:27 crc kubenswrapper[4796]: E1205 11:12:27.804075 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec37f3b92d97e6717c6fbe64326a0fa5a77760094af0cb41b93d38095f7b4e76\": container with ID starting with ec37f3b92d97e6717c6fbe64326a0fa5a77760094af0cb41b93d38095f7b4e76 not found: ID does not exist" containerID="ec37f3b92d97e6717c6fbe64326a0fa5a77760094af0cb41b93d38095f7b4e76" Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.804183 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec37f3b92d97e6717c6fbe64326a0fa5a77760094af0cb41b93d38095f7b4e76"} err="failed to get container status \"ec37f3b92d97e6717c6fbe64326a0fa5a77760094af0cb41b93d38095f7b4e76\": rpc error: code = NotFound desc = could not find container \"ec37f3b92d97e6717c6fbe64326a0fa5a77760094af0cb41b93d38095f7b4e76\": container with ID starting with ec37f3b92d97e6717c6fbe64326a0fa5a77760094af0cb41b93d38095f7b4e76 not found: 
ID does not exist" Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.804272 4796 scope.go:117] "RemoveContainer" containerID="2c75fc531d4e746fb24e20ccde5f1ba98cdc3ca05dd00e7bd46faef0be0c19dc" Dec 05 11:12:27 crc kubenswrapper[4796]: E1205 11:12:27.804771 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c75fc531d4e746fb24e20ccde5f1ba98cdc3ca05dd00e7bd46faef0be0c19dc\": container with ID starting with 2c75fc531d4e746fb24e20ccde5f1ba98cdc3ca05dd00e7bd46faef0be0c19dc not found: ID does not exist" containerID="2c75fc531d4e746fb24e20ccde5f1ba98cdc3ca05dd00e7bd46faef0be0c19dc" Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.804795 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c75fc531d4e746fb24e20ccde5f1ba98cdc3ca05dd00e7bd46faef0be0c19dc"} err="failed to get container status \"2c75fc531d4e746fb24e20ccde5f1ba98cdc3ca05dd00e7bd46faef0be0c19dc\": rpc error: code = NotFound desc = could not find container \"2c75fc531d4e746fb24e20ccde5f1ba98cdc3ca05dd00e7bd46faef0be0c19dc\": container with ID starting with 2c75fc531d4e746fb24e20ccde5f1ba98cdc3ca05dd00e7bd46faef0be0c19dc not found: ID does not exist" Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.804811 4796 scope.go:117] "RemoveContainer" containerID="764099660d1562fb393409ffd016693a7a1dd73ff42756a87812be93d86f6159" Dec 05 11:12:27 crc kubenswrapper[4796]: E1205 11:12:27.805044 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"764099660d1562fb393409ffd016693a7a1dd73ff42756a87812be93d86f6159\": container with ID starting with 764099660d1562fb393409ffd016693a7a1dd73ff42756a87812be93d86f6159 not found: ID does not exist" containerID="764099660d1562fb393409ffd016693a7a1dd73ff42756a87812be93d86f6159" Dec 05 11:12:27 crc kubenswrapper[4796]: I1205 11:12:27.805077 4796 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"764099660d1562fb393409ffd016693a7a1dd73ff42756a87812be93d86f6159"} err="failed to get container status \"764099660d1562fb393409ffd016693a7a1dd73ff42756a87812be93d86f6159\": rpc error: code = NotFound desc = could not find container \"764099660d1562fb393409ffd016693a7a1dd73ff42756a87812be93d86f6159\": container with ID starting with 764099660d1562fb393409ffd016693a7a1dd73ff42756a87812be93d86f6159 not found: ID does not exist" Dec 05 11:12:28 crc kubenswrapper[4796]: I1205 11:12:28.041715 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dafdd10-d103-4367-a524-832b689426da" path="/var/lib/kubelet/pods/7dafdd10-d103-4367-a524-832b689426da/volumes" Dec 05 11:12:35 crc kubenswrapper[4796]: I1205 11:12:35.177076 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 11:12:35 crc kubenswrapper[4796]: I1205 11:12:35.177620 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 11:12:35 crc kubenswrapper[4796]: I1205 11:12:35.177672 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 11:12:35 crc kubenswrapper[4796]: I1205 11:12:35.178566 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"38444dbc2e54bd632b755d0a32520bd65ff9b96410767c08f22e96e697037a59"} 
pod="openshift-machine-config-operator/machine-config-daemon-9pllw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 11:12:35 crc kubenswrapper[4796]: I1205 11:12:35.178625 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" containerID="cri-o://38444dbc2e54bd632b755d0a32520bd65ff9b96410767c08f22e96e697037a59" gracePeriod=600 Dec 05 11:12:35 crc kubenswrapper[4796]: I1205 11:12:35.801896 4796 generic.go:334] "Generic (PLEG): container finished" podID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerID="38444dbc2e54bd632b755d0a32520bd65ff9b96410767c08f22e96e697037a59" exitCode=0 Dec 05 11:12:35 crc kubenswrapper[4796]: I1205 11:12:35.802143 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerDied","Data":"38444dbc2e54bd632b755d0a32520bd65ff9b96410767c08f22e96e697037a59"} Dec 05 11:12:35 crc kubenswrapper[4796]: I1205 11:12:35.802405 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerStarted","Data":"c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af"} Dec 05 11:12:35 crc kubenswrapper[4796]: I1205 11:12:35.802431 4796 scope.go:117] "RemoveContainer" containerID="9b0c4fbf43580ff48691bebd4cb8a186137c6199894bef955a64527aaa1f7377" Dec 05 11:12:37 crc kubenswrapper[4796]: I1205 11:12:37.988797 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bk4vf"] Dec 05 11:12:37 crc kubenswrapper[4796]: E1205 11:12:37.990561 4796 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7dafdd10-d103-4367-a524-832b689426da" containerName="extract-content" Dec 05 11:12:37 crc kubenswrapper[4796]: I1205 11:12:37.990590 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dafdd10-d103-4367-a524-832b689426da" containerName="extract-content" Dec 05 11:12:37 crc kubenswrapper[4796]: E1205 11:12:37.990603 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dafdd10-d103-4367-a524-832b689426da" containerName="registry-server" Dec 05 11:12:37 crc kubenswrapper[4796]: I1205 11:12:37.990611 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dafdd10-d103-4367-a524-832b689426da" containerName="registry-server" Dec 05 11:12:37 crc kubenswrapper[4796]: E1205 11:12:37.990631 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dafdd10-d103-4367-a524-832b689426da" containerName="extract-utilities" Dec 05 11:12:37 crc kubenswrapper[4796]: I1205 11:12:37.990640 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dafdd10-d103-4367-a524-832b689426da" containerName="extract-utilities" Dec 05 11:12:37 crc kubenswrapper[4796]: I1205 11:12:37.990967 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dafdd10-d103-4367-a524-832b689426da" containerName="registry-server" Dec 05 11:12:37 crc kubenswrapper[4796]: I1205 11:12:37.993052 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bk4vf" Dec 05 11:12:38 crc kubenswrapper[4796]: I1205 11:12:38.001289 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bk4vf"] Dec 05 11:12:38 crc kubenswrapper[4796]: I1205 11:12:38.192884 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/377179e0-c219-4e90-a097-7adc984766b5-utilities\") pod \"community-operators-bk4vf\" (UID: \"377179e0-c219-4e90-a097-7adc984766b5\") " pod="openshift-marketplace/community-operators-bk4vf" Dec 05 11:12:38 crc kubenswrapper[4796]: I1205 11:12:38.193769 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp4wg\" (UniqueName: \"kubernetes.io/projected/377179e0-c219-4e90-a097-7adc984766b5-kube-api-access-dp4wg\") pod \"community-operators-bk4vf\" (UID: \"377179e0-c219-4e90-a097-7adc984766b5\") " pod="openshift-marketplace/community-operators-bk4vf" Dec 05 11:12:38 crc kubenswrapper[4796]: I1205 11:12:38.193995 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/377179e0-c219-4e90-a097-7adc984766b5-catalog-content\") pod \"community-operators-bk4vf\" (UID: \"377179e0-c219-4e90-a097-7adc984766b5\") " pod="openshift-marketplace/community-operators-bk4vf" Dec 05 11:12:38 crc kubenswrapper[4796]: I1205 11:12:38.298568 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/377179e0-c219-4e90-a097-7adc984766b5-catalog-content\") pod \"community-operators-bk4vf\" (UID: \"377179e0-c219-4e90-a097-7adc984766b5\") " pod="openshift-marketplace/community-operators-bk4vf" Dec 05 11:12:38 crc kubenswrapper[4796]: I1205 11:12:38.299423 4796 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/377179e0-c219-4e90-a097-7adc984766b5-catalog-content\") pod \"community-operators-bk4vf\" (UID: \"377179e0-c219-4e90-a097-7adc984766b5\") " pod="openshift-marketplace/community-operators-bk4vf" Dec 05 11:12:38 crc kubenswrapper[4796]: I1205 11:12:38.299675 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/377179e0-c219-4e90-a097-7adc984766b5-utilities\") pod \"community-operators-bk4vf\" (UID: \"377179e0-c219-4e90-a097-7adc984766b5\") " pod="openshift-marketplace/community-operators-bk4vf" Dec 05 11:12:38 crc kubenswrapper[4796]: I1205 11:12:38.299802 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp4wg\" (UniqueName: \"kubernetes.io/projected/377179e0-c219-4e90-a097-7adc984766b5-kube-api-access-dp4wg\") pod \"community-operators-bk4vf\" (UID: \"377179e0-c219-4e90-a097-7adc984766b5\") " pod="openshift-marketplace/community-operators-bk4vf" Dec 05 11:12:38 crc kubenswrapper[4796]: I1205 11:12:38.300492 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/377179e0-c219-4e90-a097-7adc984766b5-utilities\") pod \"community-operators-bk4vf\" (UID: \"377179e0-c219-4e90-a097-7adc984766b5\") " pod="openshift-marketplace/community-operators-bk4vf" Dec 05 11:12:38 crc kubenswrapper[4796]: I1205 11:12:38.318631 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp4wg\" (UniqueName: \"kubernetes.io/projected/377179e0-c219-4e90-a097-7adc984766b5-kube-api-access-dp4wg\") pod \"community-operators-bk4vf\" (UID: \"377179e0-c219-4e90-a097-7adc984766b5\") " pod="openshift-marketplace/community-operators-bk4vf" Dec 05 11:12:38 crc kubenswrapper[4796]: I1205 11:12:38.618472 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bk4vf" Dec 05 11:12:39 crc kubenswrapper[4796]: I1205 11:12:39.033024 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bk4vf"] Dec 05 11:12:39 crc kubenswrapper[4796]: W1205 11:12:39.036142 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod377179e0_c219_4e90_a097_7adc984766b5.slice/crio-ec5fb3fac9fcb656511b55a6d64022789d942666e795350cf6672d8e235a34cd WatchSource:0}: Error finding container ec5fb3fac9fcb656511b55a6d64022789d942666e795350cf6672d8e235a34cd: Status 404 returned error can't find the container with id ec5fb3fac9fcb656511b55a6d64022789d942666e795350cf6672d8e235a34cd Dec 05 11:12:39 crc kubenswrapper[4796]: I1205 11:12:39.851897 4796 generic.go:334] "Generic (PLEG): container finished" podID="377179e0-c219-4e90-a097-7adc984766b5" containerID="d57baeb431afe574c897ef18119991f1aea74441312c26c1a8c70a4ae9defca4" exitCode=0 Dec 05 11:12:39 crc kubenswrapper[4796]: I1205 11:12:39.852026 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk4vf" event={"ID":"377179e0-c219-4e90-a097-7adc984766b5","Type":"ContainerDied","Data":"d57baeb431afe574c897ef18119991f1aea74441312c26c1a8c70a4ae9defca4"} Dec 05 11:12:39 crc kubenswrapper[4796]: I1205 11:12:39.852645 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk4vf" event={"ID":"377179e0-c219-4e90-a097-7adc984766b5","Type":"ContainerStarted","Data":"ec5fb3fac9fcb656511b55a6d64022789d942666e795350cf6672d8e235a34cd"} Dec 05 11:12:40 crc kubenswrapper[4796]: I1205 11:12:40.864589 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk4vf" 
event={"ID":"377179e0-c219-4e90-a097-7adc984766b5","Type":"ContainerStarted","Data":"0c56697fc125c268fa611df796ff153ea982b45c5c5f5e9b67dc49c7a2a58e00"} Dec 05 11:12:41 crc kubenswrapper[4796]: I1205 11:12:41.876472 4796 generic.go:334] "Generic (PLEG): container finished" podID="377179e0-c219-4e90-a097-7adc984766b5" containerID="0c56697fc125c268fa611df796ff153ea982b45c5c5f5e9b67dc49c7a2a58e00" exitCode=0 Dec 05 11:12:41 crc kubenswrapper[4796]: I1205 11:12:41.876530 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk4vf" event={"ID":"377179e0-c219-4e90-a097-7adc984766b5","Type":"ContainerDied","Data":"0c56697fc125c268fa611df796ff153ea982b45c5c5f5e9b67dc49c7a2a58e00"} Dec 05 11:12:41 crc kubenswrapper[4796]: I1205 11:12:41.877400 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk4vf" event={"ID":"377179e0-c219-4e90-a097-7adc984766b5","Type":"ContainerStarted","Data":"87ec0cec0556c134a398c675cff76cb456afc7d8ba2660aa23256e6ddde0c174"} Dec 05 11:12:41 crc kubenswrapper[4796]: I1205 11:12:41.916912 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bk4vf" podStartSLOduration=3.414326724 podStartE2EDuration="4.916891659s" podCreationTimestamp="2025-12-05 11:12:37 +0000 UTC" firstStartedPulling="2025-12-05 11:12:39.853758126 +0000 UTC m=+2706.141863640" lastFinishedPulling="2025-12-05 11:12:41.356323062 +0000 UTC m=+2707.644428575" observedRunningTime="2025-12-05 11:12:41.898327949 +0000 UTC m=+2708.186433462" watchObservedRunningTime="2025-12-05 11:12:41.916891659 +0000 UTC m=+2708.204997172" Dec 05 11:12:48 crc kubenswrapper[4796]: I1205 11:12:48.619332 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bk4vf" Dec 05 11:12:48 crc kubenswrapper[4796]: I1205 11:12:48.619776 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-bk4vf" Dec 05 11:12:48 crc kubenswrapper[4796]: I1205 11:12:48.661485 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bk4vf" Dec 05 11:12:48 crc kubenswrapper[4796]: I1205 11:12:48.975562 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bk4vf" Dec 05 11:12:49 crc kubenswrapper[4796]: I1205 11:12:49.022301 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bk4vf"] Dec 05 11:12:50 crc kubenswrapper[4796]: I1205 11:12:50.951540 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bk4vf" podUID="377179e0-c219-4e90-a097-7adc984766b5" containerName="registry-server" containerID="cri-o://87ec0cec0556c134a398c675cff76cb456afc7d8ba2660aa23256e6ddde0c174" gracePeriod=2 Dec 05 11:12:51 crc kubenswrapper[4796]: I1205 11:12:51.427293 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bk4vf" Dec 05 11:12:51 crc kubenswrapper[4796]: I1205 11:12:51.600181 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/377179e0-c219-4e90-a097-7adc984766b5-utilities\") pod \"377179e0-c219-4e90-a097-7adc984766b5\" (UID: \"377179e0-c219-4e90-a097-7adc984766b5\") " Dec 05 11:12:51 crc kubenswrapper[4796]: I1205 11:12:51.600248 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp4wg\" (UniqueName: \"kubernetes.io/projected/377179e0-c219-4e90-a097-7adc984766b5-kube-api-access-dp4wg\") pod \"377179e0-c219-4e90-a097-7adc984766b5\" (UID: \"377179e0-c219-4e90-a097-7adc984766b5\") " Dec 05 11:12:51 crc kubenswrapper[4796]: I1205 11:12:51.600289 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/377179e0-c219-4e90-a097-7adc984766b5-catalog-content\") pod \"377179e0-c219-4e90-a097-7adc984766b5\" (UID: \"377179e0-c219-4e90-a097-7adc984766b5\") " Dec 05 11:12:51 crc kubenswrapper[4796]: I1205 11:12:51.600881 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/377179e0-c219-4e90-a097-7adc984766b5-utilities" (OuterVolumeSpecName: "utilities") pod "377179e0-c219-4e90-a097-7adc984766b5" (UID: "377179e0-c219-4e90-a097-7adc984766b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:12:51 crc kubenswrapper[4796]: I1205 11:12:51.606032 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/377179e0-c219-4e90-a097-7adc984766b5-kube-api-access-dp4wg" (OuterVolumeSpecName: "kube-api-access-dp4wg") pod "377179e0-c219-4e90-a097-7adc984766b5" (UID: "377179e0-c219-4e90-a097-7adc984766b5"). InnerVolumeSpecName "kube-api-access-dp4wg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:12:51 crc kubenswrapper[4796]: I1205 11:12:51.640267 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/377179e0-c219-4e90-a097-7adc984766b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "377179e0-c219-4e90-a097-7adc984766b5" (UID: "377179e0-c219-4e90-a097-7adc984766b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:12:51 crc kubenswrapper[4796]: I1205 11:12:51.703541 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/377179e0-c219-4e90-a097-7adc984766b5-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 11:12:51 crc kubenswrapper[4796]: I1205 11:12:51.703575 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp4wg\" (UniqueName: \"kubernetes.io/projected/377179e0-c219-4e90-a097-7adc984766b5-kube-api-access-dp4wg\") on node \"crc\" DevicePath \"\"" Dec 05 11:12:51 crc kubenswrapper[4796]: I1205 11:12:51.703589 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/377179e0-c219-4e90-a097-7adc984766b5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 11:12:51 crc kubenswrapper[4796]: I1205 11:12:51.972858 4796 generic.go:334] "Generic (PLEG): container finished" podID="377179e0-c219-4e90-a097-7adc984766b5" containerID="87ec0cec0556c134a398c675cff76cb456afc7d8ba2660aa23256e6ddde0c174" exitCode=0 Dec 05 11:12:51 crc kubenswrapper[4796]: I1205 11:12:51.972926 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bk4vf" Dec 05 11:12:51 crc kubenswrapper[4796]: I1205 11:12:51.972940 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk4vf" event={"ID":"377179e0-c219-4e90-a097-7adc984766b5","Type":"ContainerDied","Data":"87ec0cec0556c134a398c675cff76cb456afc7d8ba2660aa23256e6ddde0c174"} Dec 05 11:12:51 crc kubenswrapper[4796]: I1205 11:12:51.973267 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk4vf" event={"ID":"377179e0-c219-4e90-a097-7adc984766b5","Type":"ContainerDied","Data":"ec5fb3fac9fcb656511b55a6d64022789d942666e795350cf6672d8e235a34cd"} Dec 05 11:12:51 crc kubenswrapper[4796]: I1205 11:12:51.973301 4796 scope.go:117] "RemoveContainer" containerID="87ec0cec0556c134a398c675cff76cb456afc7d8ba2660aa23256e6ddde0c174" Dec 05 11:12:52 crc kubenswrapper[4796]: I1205 11:12:52.047776 4796 scope.go:117] "RemoveContainer" containerID="0c56697fc125c268fa611df796ff153ea982b45c5c5f5e9b67dc49c7a2a58e00" Dec 05 11:12:52 crc kubenswrapper[4796]: I1205 11:12:52.060103 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bk4vf"] Dec 05 11:12:52 crc kubenswrapper[4796]: I1205 11:12:52.060140 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bk4vf"] Dec 05 11:12:52 crc kubenswrapper[4796]: I1205 11:12:52.079894 4796 scope.go:117] "RemoveContainer" containerID="d57baeb431afe574c897ef18119991f1aea74441312c26c1a8c70a4ae9defca4" Dec 05 11:12:52 crc kubenswrapper[4796]: I1205 11:12:52.112492 4796 scope.go:117] "RemoveContainer" containerID="87ec0cec0556c134a398c675cff76cb456afc7d8ba2660aa23256e6ddde0c174" Dec 05 11:12:52 crc kubenswrapper[4796]: E1205 11:12:52.112825 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"87ec0cec0556c134a398c675cff76cb456afc7d8ba2660aa23256e6ddde0c174\": container with ID starting with 87ec0cec0556c134a398c675cff76cb456afc7d8ba2660aa23256e6ddde0c174 not found: ID does not exist" containerID="87ec0cec0556c134a398c675cff76cb456afc7d8ba2660aa23256e6ddde0c174" Dec 05 11:12:52 crc kubenswrapper[4796]: I1205 11:12:52.112852 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87ec0cec0556c134a398c675cff76cb456afc7d8ba2660aa23256e6ddde0c174"} err="failed to get container status \"87ec0cec0556c134a398c675cff76cb456afc7d8ba2660aa23256e6ddde0c174\": rpc error: code = NotFound desc = could not find container \"87ec0cec0556c134a398c675cff76cb456afc7d8ba2660aa23256e6ddde0c174\": container with ID starting with 87ec0cec0556c134a398c675cff76cb456afc7d8ba2660aa23256e6ddde0c174 not found: ID does not exist" Dec 05 11:12:52 crc kubenswrapper[4796]: I1205 11:12:52.112872 4796 scope.go:117] "RemoveContainer" containerID="0c56697fc125c268fa611df796ff153ea982b45c5c5f5e9b67dc49c7a2a58e00" Dec 05 11:12:52 crc kubenswrapper[4796]: E1205 11:12:52.113134 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c56697fc125c268fa611df796ff153ea982b45c5c5f5e9b67dc49c7a2a58e00\": container with ID starting with 0c56697fc125c268fa611df796ff153ea982b45c5c5f5e9b67dc49c7a2a58e00 not found: ID does not exist" containerID="0c56697fc125c268fa611df796ff153ea982b45c5c5f5e9b67dc49c7a2a58e00" Dec 05 11:12:52 crc kubenswrapper[4796]: I1205 11:12:52.113153 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c56697fc125c268fa611df796ff153ea982b45c5c5f5e9b67dc49c7a2a58e00"} err="failed to get container status \"0c56697fc125c268fa611df796ff153ea982b45c5c5f5e9b67dc49c7a2a58e00\": rpc error: code = NotFound desc = could not find container \"0c56697fc125c268fa611df796ff153ea982b45c5c5f5e9b67dc49c7a2a58e00\": container with ID 
starting with 0c56697fc125c268fa611df796ff153ea982b45c5c5f5e9b67dc49c7a2a58e00 not found: ID does not exist" Dec 05 11:12:52 crc kubenswrapper[4796]: I1205 11:12:52.113166 4796 scope.go:117] "RemoveContainer" containerID="d57baeb431afe574c897ef18119991f1aea74441312c26c1a8c70a4ae9defca4" Dec 05 11:12:52 crc kubenswrapper[4796]: E1205 11:12:52.113470 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d57baeb431afe574c897ef18119991f1aea74441312c26c1a8c70a4ae9defca4\": container with ID starting with d57baeb431afe574c897ef18119991f1aea74441312c26c1a8c70a4ae9defca4 not found: ID does not exist" containerID="d57baeb431afe574c897ef18119991f1aea74441312c26c1a8c70a4ae9defca4" Dec 05 11:12:52 crc kubenswrapper[4796]: I1205 11:12:52.113512 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d57baeb431afe574c897ef18119991f1aea74441312c26c1a8c70a4ae9defca4"} err="failed to get container status \"d57baeb431afe574c897ef18119991f1aea74441312c26c1a8c70a4ae9defca4\": rpc error: code = NotFound desc = could not find container \"d57baeb431afe574c897ef18119991f1aea74441312c26c1a8c70a4ae9defca4\": container with ID starting with d57baeb431afe574c897ef18119991f1aea74441312c26c1a8c70a4ae9defca4 not found: ID does not exist" Dec 05 11:12:54 crc kubenswrapper[4796]: I1205 11:12:54.041419 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="377179e0-c219-4e90-a097-7adc984766b5" path="/var/lib/kubelet/pods/377179e0-c219-4e90-a097-7adc984766b5/volumes" Dec 05 11:13:01 crc kubenswrapper[4796]: I1205 11:13:01.055547 4796 generic.go:334] "Generic (PLEG): container finished" podID="b3a6fb02-a525-41cb-96f5-ad01c2999e4d" containerID="8578812c55dc487fcd8f20be8da790a3c02693e580e6d0d014995cefcb868764" exitCode=0 Dec 05 11:13:01 crc kubenswrapper[4796]: I1205 11:13:01.055634 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/tempest-tests-tempest" event={"ID":"b3a6fb02-a525-41cb-96f5-ad01c2999e4d","Type":"ContainerDied","Data":"8578812c55dc487fcd8f20be8da790a3c02693e580e6d0d014995cefcb868764"} Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.403443 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.524372 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-test-operator-ephemeral-workdir\") pod \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.524428 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-test-operator-ephemeral-temporary\") pod \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.524481 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-ca-certs\") pod \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.524499 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-ssh-key\") pod \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.524585 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.524607 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-config-data\") pod \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.524768 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl75s\" (UniqueName: \"kubernetes.io/projected/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-kube-api-access-fl75s\") pod \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.524829 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-openstack-config-secret\") pod \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.524916 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-openstack-config\") pod \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\" (UID: \"b3a6fb02-a525-41cb-96f5-ad01c2999e4d\") " Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.525552 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-config-data" (OuterVolumeSpecName: "config-data") pod "b3a6fb02-a525-41cb-96f5-ad01c2999e4d" (UID: "b3a6fb02-a525-41cb-96f5-ad01c2999e4d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.525665 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "b3a6fb02-a525-41cb-96f5-ad01c2999e4d" (UID: "b3a6fb02-a525-41cb-96f5-ad01c2999e4d"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.531246 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "b3a6fb02-a525-41cb-96f5-ad01c2999e4d" (UID: "b3a6fb02-a525-41cb-96f5-ad01c2999e4d"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.531517 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "b3a6fb02-a525-41cb-96f5-ad01c2999e4d" (UID: "b3a6fb02-a525-41cb-96f5-ad01c2999e4d"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.531589 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-kube-api-access-fl75s" (OuterVolumeSpecName: "kube-api-access-fl75s") pod "b3a6fb02-a525-41cb-96f5-ad01c2999e4d" (UID: "b3a6fb02-a525-41cb-96f5-ad01c2999e4d"). InnerVolumeSpecName "kube-api-access-fl75s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.553012 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "b3a6fb02-a525-41cb-96f5-ad01c2999e4d" (UID: "b3a6fb02-a525-41cb-96f5-ad01c2999e4d"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.553388 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b3a6fb02-a525-41cb-96f5-ad01c2999e4d" (UID: "b3a6fb02-a525-41cb-96f5-ad01c2999e4d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.554059 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b3a6fb02-a525-41cb-96f5-ad01c2999e4d" (UID: "b3a6fb02-a525-41cb-96f5-ad01c2999e4d"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.567849 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b3a6fb02-a525-41cb-96f5-ad01c2999e4d" (UID: "b3a6fb02-a525-41cb-96f5-ad01c2999e4d"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.627803 4796 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.627830 4796 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-config-data\") on node \"crc\" DevicePath \"\"" Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.627845 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl75s\" (UniqueName: \"kubernetes.io/projected/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-kube-api-access-fl75s\") on node \"crc\" DevicePath \"\"" Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.627858 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.627868 4796 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.627878 4796 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.627889 4796 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-test-operator-ephemeral-temporary\") on node \"crc\" 
DevicePath \"\"" Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.627897 4796 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.627906 4796 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b3a6fb02-a525-41cb-96f5-ad01c2999e4d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.647116 4796 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 05 11:13:02 crc kubenswrapper[4796]: I1205 11:13:02.730744 4796 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 05 11:13:03 crc kubenswrapper[4796]: I1205 11:13:03.073140 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b3a6fb02-a525-41cb-96f5-ad01c2999e4d","Type":"ContainerDied","Data":"51700bc5713bd90d8a0529b48cbeee4d3bd11792590be149ac5a482b961c1f5f"} Dec 05 11:13:03 crc kubenswrapper[4796]: I1205 11:13:03.073198 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51700bc5713bd90d8a0529b48cbeee4d3bd11792590be149ac5a482b961c1f5f" Dec 05 11:13:03 crc kubenswrapper[4796]: I1205 11:13:03.073196 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 05 11:13:07 crc kubenswrapper[4796]: I1205 11:13:07.425663 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 05 11:13:07 crc kubenswrapper[4796]: E1205 11:13:07.427127 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="377179e0-c219-4e90-a097-7adc984766b5" containerName="registry-server" Dec 05 11:13:07 crc kubenswrapper[4796]: I1205 11:13:07.427149 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="377179e0-c219-4e90-a097-7adc984766b5" containerName="registry-server" Dec 05 11:13:07 crc kubenswrapper[4796]: E1205 11:13:07.427177 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="377179e0-c219-4e90-a097-7adc984766b5" containerName="extract-content" Dec 05 11:13:07 crc kubenswrapper[4796]: I1205 11:13:07.427185 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="377179e0-c219-4e90-a097-7adc984766b5" containerName="extract-content" Dec 05 11:13:07 crc kubenswrapper[4796]: E1205 11:13:07.427217 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="377179e0-c219-4e90-a097-7adc984766b5" containerName="extract-utilities" Dec 05 11:13:07 crc kubenswrapper[4796]: I1205 11:13:07.427225 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="377179e0-c219-4e90-a097-7adc984766b5" containerName="extract-utilities" Dec 05 11:13:07 crc kubenswrapper[4796]: E1205 11:13:07.427233 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3a6fb02-a525-41cb-96f5-ad01c2999e4d" containerName="tempest-tests-tempest-tests-runner" Dec 05 11:13:07 crc kubenswrapper[4796]: I1205 11:13:07.427240 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3a6fb02-a525-41cb-96f5-ad01c2999e4d" containerName="tempest-tests-tempest-tests-runner" Dec 05 11:13:07 crc kubenswrapper[4796]: I1205 11:13:07.427507 4796 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="377179e0-c219-4e90-a097-7adc984766b5" containerName="registry-server" Dec 05 11:13:07 crc kubenswrapper[4796]: I1205 11:13:07.427520 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3a6fb02-a525-41cb-96f5-ad01c2999e4d" containerName="tempest-tests-tempest-tests-runner" Dec 05 11:13:07 crc kubenswrapper[4796]: I1205 11:13:07.428591 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 11:13:07 crc kubenswrapper[4796]: I1205 11:13:07.430962 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-f4wwn" Dec 05 11:13:07 crc kubenswrapper[4796]: I1205 11:13:07.436653 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 05 11:13:07 crc kubenswrapper[4796]: I1205 11:13:07.628645 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8d8df5e9-4f31-4f63-ba36-276c43b02b75\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 11:13:07 crc kubenswrapper[4796]: I1205 11:13:07.628723 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdhxs\" (UniqueName: \"kubernetes.io/projected/8d8df5e9-4f31-4f63-ba36-276c43b02b75-kube-api-access-bdhxs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8d8df5e9-4f31-4f63-ba36-276c43b02b75\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 11:13:07 crc kubenswrapper[4796]: I1205 11:13:07.730604 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdhxs\" (UniqueName: 
\"kubernetes.io/projected/8d8df5e9-4f31-4f63-ba36-276c43b02b75-kube-api-access-bdhxs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8d8df5e9-4f31-4f63-ba36-276c43b02b75\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 11:13:07 crc kubenswrapper[4796]: I1205 11:13:07.730837 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8d8df5e9-4f31-4f63-ba36-276c43b02b75\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 11:13:07 crc kubenswrapper[4796]: I1205 11:13:07.731240 4796 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8d8df5e9-4f31-4f63-ba36-276c43b02b75\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 11:13:07 crc kubenswrapper[4796]: I1205 11:13:07.749076 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdhxs\" (UniqueName: \"kubernetes.io/projected/8d8df5e9-4f31-4f63-ba36-276c43b02b75-kube-api-access-bdhxs\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8d8df5e9-4f31-4f63-ba36-276c43b02b75\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 11:13:07 crc kubenswrapper[4796]: I1205 11:13:07.754813 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"8d8df5e9-4f31-4f63-ba36-276c43b02b75\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 11:13:08 
crc kubenswrapper[4796]: I1205 11:13:08.051768 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 05 11:13:08 crc kubenswrapper[4796]: I1205 11:13:08.491296 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 05 11:13:08 crc kubenswrapper[4796]: W1205 11:13:08.505944 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d8df5e9_4f31_4f63_ba36_276c43b02b75.slice/crio-df5209ad831960e31e8b816be64122a25305d7662eb41158aa234dca70ce7f53 WatchSource:0}: Error finding container df5209ad831960e31e8b816be64122a25305d7662eb41158aa234dca70ce7f53: Status 404 returned error can't find the container with id df5209ad831960e31e8b816be64122a25305d7662eb41158aa234dca70ce7f53 Dec 05 11:13:09 crc kubenswrapper[4796]: I1205 11:13:09.154729 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"8d8df5e9-4f31-4f63-ba36-276c43b02b75","Type":"ContainerStarted","Data":"df5209ad831960e31e8b816be64122a25305d7662eb41158aa234dca70ce7f53"} Dec 05 11:13:10 crc kubenswrapper[4796]: I1205 11:13:10.165212 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"8d8df5e9-4f31-4f63-ba36-276c43b02b75","Type":"ContainerStarted","Data":"bf8f31b6961acaff91de0a74aaccfa84df7d95769d4bba0f1becdc1f8d453597"} Dec 05 11:13:10 crc kubenswrapper[4796]: I1205 11:13:10.182116 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.222044255 podStartE2EDuration="3.182092411s" podCreationTimestamp="2025-12-05 11:13:07 +0000 UTC" firstStartedPulling="2025-12-05 11:13:08.509902476 +0000 UTC 
m=+2734.798007989" lastFinishedPulling="2025-12-05 11:13:09.469950633 +0000 UTC m=+2735.758056145" observedRunningTime="2025-12-05 11:13:10.178875495 +0000 UTC m=+2736.466981008" watchObservedRunningTime="2025-12-05 11:13:10.182092411 +0000 UTC m=+2736.470197924" Dec 05 11:13:48 crc kubenswrapper[4796]: I1205 11:13:48.075536 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-77xl4"] Dec 05 11:13:48 crc kubenswrapper[4796]: I1205 11:13:48.077768 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-77xl4" Dec 05 11:13:48 crc kubenswrapper[4796]: I1205 11:13:48.084889 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-77xl4"] Dec 05 11:13:48 crc kubenswrapper[4796]: I1205 11:13:48.125397 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m22sf\" (UniqueName: \"kubernetes.io/projected/46aac553-2279-4b0a-9853-11524872068e-kube-api-access-m22sf\") pod \"redhat-operators-77xl4\" (UID: \"46aac553-2279-4b0a-9853-11524872068e\") " pod="openshift-marketplace/redhat-operators-77xl4" Dec 05 11:13:48 crc kubenswrapper[4796]: I1205 11:13:48.125481 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46aac553-2279-4b0a-9853-11524872068e-utilities\") pod \"redhat-operators-77xl4\" (UID: \"46aac553-2279-4b0a-9853-11524872068e\") " pod="openshift-marketplace/redhat-operators-77xl4" Dec 05 11:13:48 crc kubenswrapper[4796]: I1205 11:13:48.125586 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46aac553-2279-4b0a-9853-11524872068e-catalog-content\") pod \"redhat-operators-77xl4\" (UID: \"46aac553-2279-4b0a-9853-11524872068e\") " 
pod="openshift-marketplace/redhat-operators-77xl4" Dec 05 11:13:48 crc kubenswrapper[4796]: I1205 11:13:48.227713 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46aac553-2279-4b0a-9853-11524872068e-utilities\") pod \"redhat-operators-77xl4\" (UID: \"46aac553-2279-4b0a-9853-11524872068e\") " pod="openshift-marketplace/redhat-operators-77xl4" Dec 05 11:13:48 crc kubenswrapper[4796]: I1205 11:13:48.227833 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46aac553-2279-4b0a-9853-11524872068e-catalog-content\") pod \"redhat-operators-77xl4\" (UID: \"46aac553-2279-4b0a-9853-11524872068e\") " pod="openshift-marketplace/redhat-operators-77xl4" Dec 05 11:13:48 crc kubenswrapper[4796]: I1205 11:13:48.227862 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m22sf\" (UniqueName: \"kubernetes.io/projected/46aac553-2279-4b0a-9853-11524872068e-kube-api-access-m22sf\") pod \"redhat-operators-77xl4\" (UID: \"46aac553-2279-4b0a-9853-11524872068e\") " pod="openshift-marketplace/redhat-operators-77xl4" Dec 05 11:13:48 crc kubenswrapper[4796]: I1205 11:13:48.228530 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46aac553-2279-4b0a-9853-11524872068e-utilities\") pod \"redhat-operators-77xl4\" (UID: \"46aac553-2279-4b0a-9853-11524872068e\") " pod="openshift-marketplace/redhat-operators-77xl4" Dec 05 11:13:48 crc kubenswrapper[4796]: I1205 11:13:48.228763 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46aac553-2279-4b0a-9853-11524872068e-catalog-content\") pod \"redhat-operators-77xl4\" (UID: \"46aac553-2279-4b0a-9853-11524872068e\") " pod="openshift-marketplace/redhat-operators-77xl4" Dec 05 11:13:48 crc 
kubenswrapper[4796]: I1205 11:13:48.242991 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m22sf\" (UniqueName: \"kubernetes.io/projected/46aac553-2279-4b0a-9853-11524872068e-kube-api-access-m22sf\") pod \"redhat-operators-77xl4\" (UID: \"46aac553-2279-4b0a-9853-11524872068e\") " pod="openshift-marketplace/redhat-operators-77xl4" Dec 05 11:13:48 crc kubenswrapper[4796]: I1205 11:13:48.400404 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-77xl4" Dec 05 11:13:48 crc kubenswrapper[4796]: I1205 11:13:48.804578 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-77xl4"] Dec 05 11:13:49 crc kubenswrapper[4796]: I1205 11:13:49.451615 4796 generic.go:334] "Generic (PLEG): container finished" podID="46aac553-2279-4b0a-9853-11524872068e" containerID="5ccb43b7ed9096e023c226f47048a6bffedb48cec91eb84a33620de78c7714e5" exitCode=0 Dec 05 11:13:49 crc kubenswrapper[4796]: I1205 11:13:49.451675 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77xl4" event={"ID":"46aac553-2279-4b0a-9853-11524872068e","Type":"ContainerDied","Data":"5ccb43b7ed9096e023c226f47048a6bffedb48cec91eb84a33620de78c7714e5"} Dec 05 11:13:49 crc kubenswrapper[4796]: I1205 11:13:49.451731 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77xl4" event={"ID":"46aac553-2279-4b0a-9853-11524872068e","Type":"ContainerStarted","Data":"d275232f33974c69baee5d229c80462c51a006c4c8522e4865c4f8ec3c238d9f"} Dec 05 11:13:50 crc kubenswrapper[4796]: I1205 11:13:50.458786 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77xl4" event={"ID":"46aac553-2279-4b0a-9853-11524872068e","Type":"ContainerStarted","Data":"a9be28df6b1da2446d3d93df5d59ad19823765047f5e7f7147f594797ce9915f"} Dec 05 11:13:51 crc kubenswrapper[4796]: I1205 
11:13:51.466632 4796 generic.go:334] "Generic (PLEG): container finished" podID="46aac553-2279-4b0a-9853-11524872068e" containerID="a9be28df6b1da2446d3d93df5d59ad19823765047f5e7f7147f594797ce9915f" exitCode=0 Dec 05 11:13:51 crc kubenswrapper[4796]: I1205 11:13:51.466726 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77xl4" event={"ID":"46aac553-2279-4b0a-9853-11524872068e","Type":"ContainerDied","Data":"a9be28df6b1da2446d3d93df5d59ad19823765047f5e7f7147f594797ce9915f"} Dec 05 11:13:52 crc kubenswrapper[4796]: I1205 11:13:52.475957 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77xl4" event={"ID":"46aac553-2279-4b0a-9853-11524872068e","Type":"ContainerStarted","Data":"41c7fe7f426f9874d16c89ee62a7f9afe17fe3c4ac8a229902efa7b11bcffa02"} Dec 05 11:13:52 crc kubenswrapper[4796]: I1205 11:13:52.491880 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-77xl4" podStartSLOduration=2.003712592 podStartE2EDuration="4.491854598s" podCreationTimestamp="2025-12-05 11:13:48 +0000 UTC" firstStartedPulling="2025-12-05 11:13:49.45341478 +0000 UTC m=+2775.741520292" lastFinishedPulling="2025-12-05 11:13:51.941556784 +0000 UTC m=+2778.229662298" observedRunningTime="2025-12-05 11:13:52.48822785 +0000 UTC m=+2778.776333364" watchObservedRunningTime="2025-12-05 11:13:52.491854598 +0000 UTC m=+2778.779960110" Dec 05 11:13:58 crc kubenswrapper[4796]: I1205 11:13:58.400653 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-77xl4" Dec 05 11:13:58 crc kubenswrapper[4796]: I1205 11:13:58.401756 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-77xl4" Dec 05 11:13:58 crc kubenswrapper[4796]: I1205 11:13:58.436233 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-77xl4" Dec 05 11:13:58 crc kubenswrapper[4796]: I1205 11:13:58.551567 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-77xl4" Dec 05 11:13:58 crc kubenswrapper[4796]: I1205 11:13:58.664506 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-77xl4"] Dec 05 11:14:00 crc kubenswrapper[4796]: I1205 11:14:00.535938 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-77xl4" podUID="46aac553-2279-4b0a-9853-11524872068e" containerName="registry-server" containerID="cri-o://41c7fe7f426f9874d16c89ee62a7f9afe17fe3c4ac8a229902efa7b11bcffa02" gracePeriod=2 Dec 05 11:14:00 crc kubenswrapper[4796]: I1205 11:14:00.905032 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-77xl4" Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.023096 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46aac553-2279-4b0a-9853-11524872068e-catalog-content\") pod \"46aac553-2279-4b0a-9853-11524872068e\" (UID: \"46aac553-2279-4b0a-9853-11524872068e\") " Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.023285 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m22sf\" (UniqueName: \"kubernetes.io/projected/46aac553-2279-4b0a-9853-11524872068e-kube-api-access-m22sf\") pod \"46aac553-2279-4b0a-9853-11524872068e\" (UID: \"46aac553-2279-4b0a-9853-11524872068e\") " Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.023365 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46aac553-2279-4b0a-9853-11524872068e-utilities\") pod \"46aac553-2279-4b0a-9853-11524872068e\" (UID: 
\"46aac553-2279-4b0a-9853-11524872068e\") " Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.024377 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46aac553-2279-4b0a-9853-11524872068e-utilities" (OuterVolumeSpecName: "utilities") pod "46aac553-2279-4b0a-9853-11524872068e" (UID: "46aac553-2279-4b0a-9853-11524872068e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.028748 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46aac553-2279-4b0a-9853-11524872068e-kube-api-access-m22sf" (OuterVolumeSpecName: "kube-api-access-m22sf") pod "46aac553-2279-4b0a-9853-11524872068e" (UID: "46aac553-2279-4b0a-9853-11524872068e"). InnerVolumeSpecName "kube-api-access-m22sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.125119 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m22sf\" (UniqueName: \"kubernetes.io/projected/46aac553-2279-4b0a-9853-11524872068e-kube-api-access-m22sf\") on node \"crc\" DevicePath \"\"" Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.125148 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46aac553-2279-4b0a-9853-11524872068e-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.545520 4796 generic.go:334] "Generic (PLEG): container finished" podID="46aac553-2279-4b0a-9853-11524872068e" containerID="41c7fe7f426f9874d16c89ee62a7f9afe17fe3c4ac8a229902efa7b11bcffa02" exitCode=0 Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.545600 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-77xl4" Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.545595 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77xl4" event={"ID":"46aac553-2279-4b0a-9853-11524872068e","Type":"ContainerDied","Data":"41c7fe7f426f9874d16c89ee62a7f9afe17fe3c4ac8a229902efa7b11bcffa02"} Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.545710 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77xl4" event={"ID":"46aac553-2279-4b0a-9853-11524872068e","Type":"ContainerDied","Data":"d275232f33974c69baee5d229c80462c51a006c4c8522e4865c4f8ec3c238d9f"} Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.545737 4796 scope.go:117] "RemoveContainer" containerID="41c7fe7f426f9874d16c89ee62a7f9afe17fe3c4ac8a229902efa7b11bcffa02" Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.566210 4796 scope.go:117] "RemoveContainer" containerID="a9be28df6b1da2446d3d93df5d59ad19823765047f5e7f7147f594797ce9915f" Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.584067 4796 scope.go:117] "RemoveContainer" containerID="5ccb43b7ed9096e023c226f47048a6bffedb48cec91eb84a33620de78c7714e5" Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.621950 4796 scope.go:117] "RemoveContainer" containerID="41c7fe7f426f9874d16c89ee62a7f9afe17fe3c4ac8a229902efa7b11bcffa02" Dec 05 11:14:01 crc kubenswrapper[4796]: E1205 11:14:01.622310 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41c7fe7f426f9874d16c89ee62a7f9afe17fe3c4ac8a229902efa7b11bcffa02\": container with ID starting with 41c7fe7f426f9874d16c89ee62a7f9afe17fe3c4ac8a229902efa7b11bcffa02 not found: ID does not exist" containerID="41c7fe7f426f9874d16c89ee62a7f9afe17fe3c4ac8a229902efa7b11bcffa02" Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.622360 4796 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41c7fe7f426f9874d16c89ee62a7f9afe17fe3c4ac8a229902efa7b11bcffa02"} err="failed to get container status \"41c7fe7f426f9874d16c89ee62a7f9afe17fe3c4ac8a229902efa7b11bcffa02\": rpc error: code = NotFound desc = could not find container \"41c7fe7f426f9874d16c89ee62a7f9afe17fe3c4ac8a229902efa7b11bcffa02\": container with ID starting with 41c7fe7f426f9874d16c89ee62a7f9afe17fe3c4ac8a229902efa7b11bcffa02 not found: ID does not exist" Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.622390 4796 scope.go:117] "RemoveContainer" containerID="a9be28df6b1da2446d3d93df5d59ad19823765047f5e7f7147f594797ce9915f" Dec 05 11:14:01 crc kubenswrapper[4796]: E1205 11:14:01.622730 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9be28df6b1da2446d3d93df5d59ad19823765047f5e7f7147f594797ce9915f\": container with ID starting with a9be28df6b1da2446d3d93df5d59ad19823765047f5e7f7147f594797ce9915f not found: ID does not exist" containerID="a9be28df6b1da2446d3d93df5d59ad19823765047f5e7f7147f594797ce9915f" Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.622783 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9be28df6b1da2446d3d93df5d59ad19823765047f5e7f7147f594797ce9915f"} err="failed to get container status \"a9be28df6b1da2446d3d93df5d59ad19823765047f5e7f7147f594797ce9915f\": rpc error: code = NotFound desc = could not find container \"a9be28df6b1da2446d3d93df5d59ad19823765047f5e7f7147f594797ce9915f\": container with ID starting with a9be28df6b1da2446d3d93df5d59ad19823765047f5e7f7147f594797ce9915f not found: ID does not exist" Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.622815 4796 scope.go:117] "RemoveContainer" containerID="5ccb43b7ed9096e023c226f47048a6bffedb48cec91eb84a33620de78c7714e5" Dec 05 11:14:01 crc kubenswrapper[4796]: E1205 11:14:01.623315 4796 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ccb43b7ed9096e023c226f47048a6bffedb48cec91eb84a33620de78c7714e5\": container with ID starting with 5ccb43b7ed9096e023c226f47048a6bffedb48cec91eb84a33620de78c7714e5 not found: ID does not exist" containerID="5ccb43b7ed9096e023c226f47048a6bffedb48cec91eb84a33620de78c7714e5" Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.623367 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ccb43b7ed9096e023c226f47048a6bffedb48cec91eb84a33620de78c7714e5"} err="failed to get container status \"5ccb43b7ed9096e023c226f47048a6bffedb48cec91eb84a33620de78c7714e5\": rpc error: code = NotFound desc = could not find container \"5ccb43b7ed9096e023c226f47048a6bffedb48cec91eb84a33620de78c7714e5\": container with ID starting with 5ccb43b7ed9096e023c226f47048a6bffedb48cec91eb84a33620de78c7714e5 not found: ID does not exist" Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.680700 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46aac553-2279-4b0a-9853-11524872068e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46aac553-2279-4b0a-9853-11524872068e" (UID: "46aac553-2279-4b0a-9853-11524872068e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.736584 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46aac553-2279-4b0a-9853-11524872068e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.872550 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-77xl4"] Dec 05 11:14:01 crc kubenswrapper[4796]: I1205 11:14:01.879498 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-77xl4"] Dec 05 11:14:02 crc kubenswrapper[4796]: I1205 11:14:02.040815 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46aac553-2279-4b0a-9853-11524872068e" path="/var/lib/kubelet/pods/46aac553-2279-4b0a-9853-11524872068e/volumes" Dec 05 11:14:11 crc kubenswrapper[4796]: I1205 11:14:11.670473 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c8nrm/must-gather-ckzkb"] Dec 05 11:14:11 crc kubenswrapper[4796]: E1205 11:14:11.671180 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46aac553-2279-4b0a-9853-11524872068e" containerName="extract-content" Dec 05 11:14:11 crc kubenswrapper[4796]: I1205 11:14:11.671195 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="46aac553-2279-4b0a-9853-11524872068e" containerName="extract-content" Dec 05 11:14:11 crc kubenswrapper[4796]: E1205 11:14:11.671205 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46aac553-2279-4b0a-9853-11524872068e" containerName="registry-server" Dec 05 11:14:11 crc kubenswrapper[4796]: I1205 11:14:11.671212 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="46aac553-2279-4b0a-9853-11524872068e" containerName="registry-server" Dec 05 11:14:11 crc kubenswrapper[4796]: E1205 11:14:11.671239 4796 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="46aac553-2279-4b0a-9853-11524872068e" containerName="extract-utilities" Dec 05 11:14:11 crc kubenswrapper[4796]: I1205 11:14:11.671245 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="46aac553-2279-4b0a-9853-11524872068e" containerName="extract-utilities" Dec 05 11:14:11 crc kubenswrapper[4796]: I1205 11:14:11.671436 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="46aac553-2279-4b0a-9853-11524872068e" containerName="registry-server" Dec 05 11:14:11 crc kubenswrapper[4796]: I1205 11:14:11.672368 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c8nrm/must-gather-ckzkb" Dec 05 11:14:11 crc kubenswrapper[4796]: I1205 11:14:11.673994 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-c8nrm"/"openshift-service-ca.crt" Dec 05 11:14:11 crc kubenswrapper[4796]: I1205 11:14:11.674274 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-c8nrm"/"kube-root-ca.crt" Dec 05 11:14:11 crc kubenswrapper[4796]: I1205 11:14:11.683972 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c8nrm/must-gather-ckzkb"] Dec 05 11:14:11 crc kubenswrapper[4796]: I1205 11:14:11.795473 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrznp\" (UniqueName: \"kubernetes.io/projected/20d3233f-c7b5-40d1-99c3-6e9a68a235c2-kube-api-access-hrznp\") pod \"must-gather-ckzkb\" (UID: \"20d3233f-c7b5-40d1-99c3-6e9a68a235c2\") " pod="openshift-must-gather-c8nrm/must-gather-ckzkb" Dec 05 11:14:11 crc kubenswrapper[4796]: I1205 11:14:11.795708 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/20d3233f-c7b5-40d1-99c3-6e9a68a235c2-must-gather-output\") pod \"must-gather-ckzkb\" (UID: \"20d3233f-c7b5-40d1-99c3-6e9a68a235c2\") " 
pod="openshift-must-gather-c8nrm/must-gather-ckzkb" Dec 05 11:14:11 crc kubenswrapper[4796]: I1205 11:14:11.897198 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrznp\" (UniqueName: \"kubernetes.io/projected/20d3233f-c7b5-40d1-99c3-6e9a68a235c2-kube-api-access-hrznp\") pod \"must-gather-ckzkb\" (UID: \"20d3233f-c7b5-40d1-99c3-6e9a68a235c2\") " pod="openshift-must-gather-c8nrm/must-gather-ckzkb" Dec 05 11:14:11 crc kubenswrapper[4796]: I1205 11:14:11.897249 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/20d3233f-c7b5-40d1-99c3-6e9a68a235c2-must-gather-output\") pod \"must-gather-ckzkb\" (UID: \"20d3233f-c7b5-40d1-99c3-6e9a68a235c2\") " pod="openshift-must-gather-c8nrm/must-gather-ckzkb" Dec 05 11:14:11 crc kubenswrapper[4796]: I1205 11:14:11.897603 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/20d3233f-c7b5-40d1-99c3-6e9a68a235c2-must-gather-output\") pod \"must-gather-ckzkb\" (UID: \"20d3233f-c7b5-40d1-99c3-6e9a68a235c2\") " pod="openshift-must-gather-c8nrm/must-gather-ckzkb" Dec 05 11:14:11 crc kubenswrapper[4796]: I1205 11:14:11.913617 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrznp\" (UniqueName: \"kubernetes.io/projected/20d3233f-c7b5-40d1-99c3-6e9a68a235c2-kube-api-access-hrznp\") pod \"must-gather-ckzkb\" (UID: \"20d3233f-c7b5-40d1-99c3-6e9a68a235c2\") " pod="openshift-must-gather-c8nrm/must-gather-ckzkb" Dec 05 11:14:12 crc kubenswrapper[4796]: I1205 11:14:12.008799 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c8nrm/must-gather-ckzkb" Dec 05 11:14:12 crc kubenswrapper[4796]: I1205 11:14:12.398941 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c8nrm/must-gather-ckzkb"] Dec 05 11:14:12 crc kubenswrapper[4796]: I1205 11:14:12.623347 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c8nrm/must-gather-ckzkb" event={"ID":"20d3233f-c7b5-40d1-99c3-6e9a68a235c2","Type":"ContainerStarted","Data":"877c830bc66736022a35728f1f28da8eea175b9a8d57fcc39e1b36e02235b555"} Dec 05 11:14:20 crc kubenswrapper[4796]: I1205 11:14:20.704892 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c8nrm/must-gather-ckzkb" event={"ID":"20d3233f-c7b5-40d1-99c3-6e9a68a235c2","Type":"ContainerStarted","Data":"779637d66d012fc5a0bbe06c3288c9091b7265da06c291efaecea4b0edd3f53a"} Dec 05 11:14:20 crc kubenswrapper[4796]: I1205 11:14:20.705191 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c8nrm/must-gather-ckzkb" event={"ID":"20d3233f-c7b5-40d1-99c3-6e9a68a235c2","Type":"ContainerStarted","Data":"d604d82c81f7616f0130c115263d782a01141ca09f154cf40b96f28d45126c3c"} Dec 05 11:14:20 crc kubenswrapper[4796]: I1205 11:14:20.720889 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-c8nrm/must-gather-ckzkb" podStartSLOduration=2.059094797 podStartE2EDuration="9.720875028s" podCreationTimestamp="2025-12-05 11:14:11 +0000 UTC" firstStartedPulling="2025-12-05 11:14:12.396451902 +0000 UTC m=+2798.684557415" lastFinishedPulling="2025-12-05 11:14:20.058232133 +0000 UTC m=+2806.346337646" observedRunningTime="2025-12-05 11:14:20.715625371 +0000 UTC m=+2807.003730884" watchObservedRunningTime="2025-12-05 11:14:20.720875028 +0000 UTC m=+2807.008980540" Dec 05 11:14:23 crc kubenswrapper[4796]: I1205 11:14:23.010957 4796 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-c8nrm/crc-debug-kwmww"] Dec 05 11:14:23 crc kubenswrapper[4796]: I1205 11:14:23.012801 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c8nrm/crc-debug-kwmww" Dec 05 11:14:23 crc kubenswrapper[4796]: I1205 11:14:23.018665 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-c8nrm"/"default-dockercfg-rgs4t" Dec 05 11:14:23 crc kubenswrapper[4796]: I1205 11:14:23.107842 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcb82\" (UniqueName: \"kubernetes.io/projected/9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9-kube-api-access-hcb82\") pod \"crc-debug-kwmww\" (UID: \"9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9\") " pod="openshift-must-gather-c8nrm/crc-debug-kwmww" Dec 05 11:14:23 crc kubenswrapper[4796]: I1205 11:14:23.107911 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9-host\") pod \"crc-debug-kwmww\" (UID: \"9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9\") " pod="openshift-must-gather-c8nrm/crc-debug-kwmww" Dec 05 11:14:23 crc kubenswrapper[4796]: I1205 11:14:23.210376 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9-host\") pod \"crc-debug-kwmww\" (UID: \"9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9\") " pod="openshift-must-gather-c8nrm/crc-debug-kwmww" Dec 05 11:14:23 crc kubenswrapper[4796]: I1205 11:14:23.210510 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9-host\") pod \"crc-debug-kwmww\" (UID: \"9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9\") " pod="openshift-must-gather-c8nrm/crc-debug-kwmww" Dec 05 11:14:23 crc kubenswrapper[4796]: I1205 11:14:23.210523 
4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcb82\" (UniqueName: \"kubernetes.io/projected/9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9-kube-api-access-hcb82\") pod \"crc-debug-kwmww\" (UID: \"9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9\") " pod="openshift-must-gather-c8nrm/crc-debug-kwmww" Dec 05 11:14:23 crc kubenswrapper[4796]: I1205 11:14:23.226488 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcb82\" (UniqueName: \"kubernetes.io/projected/9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9-kube-api-access-hcb82\") pod \"crc-debug-kwmww\" (UID: \"9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9\") " pod="openshift-must-gather-c8nrm/crc-debug-kwmww" Dec 05 11:14:23 crc kubenswrapper[4796]: I1205 11:14:23.327218 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c8nrm/crc-debug-kwmww" Dec 05 11:14:23 crc kubenswrapper[4796]: I1205 11:14:23.738013 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c8nrm/crc-debug-kwmww" event={"ID":"9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9","Type":"ContainerStarted","Data":"589200b76292bfa105810153b5f5847a5b86a6d2dbd58812daff5ccfd3330931"} Dec 05 11:14:32 crc kubenswrapper[4796]: I1205 11:14:32.815208 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c8nrm/crc-debug-kwmww" event={"ID":"9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9","Type":"ContainerStarted","Data":"f641734d8f5af51af59654ac268548f2bd2d3363eb96ff25e523ac6d0f22c47e"} Dec 05 11:14:32 crc kubenswrapper[4796]: I1205 11:14:32.827175 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-c8nrm/crc-debug-kwmww" podStartSLOduration=1.019370339 podStartE2EDuration="9.827160512s" podCreationTimestamp="2025-12-05 11:14:23 +0000 UTC" firstStartedPulling="2025-12-05 11:14:23.353982785 +0000 UTC m=+2809.642088298" lastFinishedPulling="2025-12-05 11:14:32.161772958 +0000 UTC 
m=+2818.449878471" observedRunningTime="2025-12-05 11:14:32.825480765 +0000 UTC m=+2819.113586278" watchObservedRunningTime="2025-12-05 11:14:32.827160512 +0000 UTC m=+2819.115266025" Dec 05 11:14:35 crc kubenswrapper[4796]: I1205 11:14:35.177794 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 11:14:35 crc kubenswrapper[4796]: I1205 11:14:35.178318 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 11:15:00 crc kubenswrapper[4796]: I1205 11:15:00.143341 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp"] Dec 05 11:15:00 crc kubenswrapper[4796]: I1205 11:15:00.145321 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp" Dec 05 11:15:00 crc kubenswrapper[4796]: I1205 11:15:00.146868 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 05 11:15:00 crc kubenswrapper[4796]: I1205 11:15:00.147392 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 05 11:15:00 crc kubenswrapper[4796]: I1205 11:15:00.150384 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp"] Dec 05 11:15:00 crc kubenswrapper[4796]: I1205 11:15:00.327822 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s2np\" (UniqueName: \"kubernetes.io/projected/ac1d671d-5a17-44f2-8131-1cc6f851a77b-kube-api-access-2s2np\") pod \"collect-profiles-29415555-sldvp\" (UID: \"ac1d671d-5a17-44f2-8131-1cc6f851a77b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp" Dec 05 11:15:00 crc kubenswrapper[4796]: I1205 11:15:00.327988 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac1d671d-5a17-44f2-8131-1cc6f851a77b-config-volume\") pod \"collect-profiles-29415555-sldvp\" (UID: \"ac1d671d-5a17-44f2-8131-1cc6f851a77b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp" Dec 05 11:15:00 crc kubenswrapper[4796]: I1205 11:15:00.329295 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac1d671d-5a17-44f2-8131-1cc6f851a77b-secret-volume\") pod \"collect-profiles-29415555-sldvp\" (UID: \"ac1d671d-5a17-44f2-8131-1cc6f851a77b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp" Dec 05 11:15:00 crc kubenswrapper[4796]: I1205 11:15:00.430955 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac1d671d-5a17-44f2-8131-1cc6f851a77b-config-volume\") pod \"collect-profiles-29415555-sldvp\" (UID: \"ac1d671d-5a17-44f2-8131-1cc6f851a77b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp" Dec 05 11:15:00 crc kubenswrapper[4796]: I1205 11:15:00.431271 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac1d671d-5a17-44f2-8131-1cc6f851a77b-secret-volume\") pod \"collect-profiles-29415555-sldvp\" (UID: \"ac1d671d-5a17-44f2-8131-1cc6f851a77b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp" Dec 05 11:15:00 crc kubenswrapper[4796]: I1205 11:15:00.431306 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s2np\" (UniqueName: \"kubernetes.io/projected/ac1d671d-5a17-44f2-8131-1cc6f851a77b-kube-api-access-2s2np\") pod \"collect-profiles-29415555-sldvp\" (UID: \"ac1d671d-5a17-44f2-8131-1cc6f851a77b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp" Dec 05 11:15:00 crc kubenswrapper[4796]: I1205 11:15:00.431855 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac1d671d-5a17-44f2-8131-1cc6f851a77b-config-volume\") pod \"collect-profiles-29415555-sldvp\" (UID: \"ac1d671d-5a17-44f2-8131-1cc6f851a77b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp" Dec 05 11:15:00 crc kubenswrapper[4796]: I1205 11:15:00.440385 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ac1d671d-5a17-44f2-8131-1cc6f851a77b-secret-volume\") pod \"collect-profiles-29415555-sldvp\" (UID: \"ac1d671d-5a17-44f2-8131-1cc6f851a77b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp" Dec 05 11:15:00 crc kubenswrapper[4796]: I1205 11:15:00.445251 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s2np\" (UniqueName: \"kubernetes.io/projected/ac1d671d-5a17-44f2-8131-1cc6f851a77b-kube-api-access-2s2np\") pod \"collect-profiles-29415555-sldvp\" (UID: \"ac1d671d-5a17-44f2-8131-1cc6f851a77b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp" Dec 05 11:15:00 crc kubenswrapper[4796]: I1205 11:15:00.461111 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp" Dec 05 11:15:00 crc kubenswrapper[4796]: I1205 11:15:00.879651 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp"] Dec 05 11:15:01 crc kubenswrapper[4796]: I1205 11:15:01.057429 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp" event={"ID":"ac1d671d-5a17-44f2-8131-1cc6f851a77b","Type":"ContainerStarted","Data":"effdeef501be437e1c087871440c29605e54043fd21ce739dca2ba69c933294a"} Dec 05 11:15:01 crc kubenswrapper[4796]: I1205 11:15:01.057479 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp" event={"ID":"ac1d671d-5a17-44f2-8131-1cc6f851a77b","Type":"ContainerStarted","Data":"6f788db15cf9b10e3b971e650345ae19fa8cffa0c85acff8ce90c8c827c24899"} Dec 05 11:15:01 crc kubenswrapper[4796]: I1205 11:15:01.081187 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp" 
podStartSLOduration=1.081169479 podStartE2EDuration="1.081169479s" podCreationTimestamp="2025-12-05 11:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 11:15:01.074833491 +0000 UTC m=+2847.362939014" watchObservedRunningTime="2025-12-05 11:15:01.081169479 +0000 UTC m=+2847.369274992" Dec 05 11:15:02 crc kubenswrapper[4796]: I1205 11:15:02.064471 4796 generic.go:334] "Generic (PLEG): container finished" podID="ac1d671d-5a17-44f2-8131-1cc6f851a77b" containerID="effdeef501be437e1c087871440c29605e54043fd21ce739dca2ba69c933294a" exitCode=0 Dec 05 11:15:02 crc kubenswrapper[4796]: I1205 11:15:02.065100 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp" event={"ID":"ac1d671d-5a17-44f2-8131-1cc6f851a77b","Type":"ContainerDied","Data":"effdeef501be437e1c087871440c29605e54043fd21ce739dca2ba69c933294a"} Dec 05 11:15:03 crc kubenswrapper[4796]: I1205 11:15:03.365370 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp" Dec 05 11:15:03 crc kubenswrapper[4796]: I1205 11:15:03.387547 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac1d671d-5a17-44f2-8131-1cc6f851a77b-secret-volume\") pod \"ac1d671d-5a17-44f2-8131-1cc6f851a77b\" (UID: \"ac1d671d-5a17-44f2-8131-1cc6f851a77b\") " Dec 05 11:15:03 crc kubenswrapper[4796]: I1205 11:15:03.387645 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s2np\" (UniqueName: \"kubernetes.io/projected/ac1d671d-5a17-44f2-8131-1cc6f851a77b-kube-api-access-2s2np\") pod \"ac1d671d-5a17-44f2-8131-1cc6f851a77b\" (UID: \"ac1d671d-5a17-44f2-8131-1cc6f851a77b\") " Dec 05 11:15:03 crc kubenswrapper[4796]: I1205 11:15:03.387672 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac1d671d-5a17-44f2-8131-1cc6f851a77b-config-volume\") pod \"ac1d671d-5a17-44f2-8131-1cc6f851a77b\" (UID: \"ac1d671d-5a17-44f2-8131-1cc6f851a77b\") " Dec 05 11:15:03 crc kubenswrapper[4796]: I1205 11:15:03.388526 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac1d671d-5a17-44f2-8131-1cc6f851a77b-config-volume" (OuterVolumeSpecName: "config-volume") pod "ac1d671d-5a17-44f2-8131-1cc6f851a77b" (UID: "ac1d671d-5a17-44f2-8131-1cc6f851a77b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 05 11:15:03 crc kubenswrapper[4796]: I1205 11:15:03.394172 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1d671d-5a17-44f2-8131-1cc6f851a77b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ac1d671d-5a17-44f2-8131-1cc6f851a77b" (UID: "ac1d671d-5a17-44f2-8131-1cc6f851a77b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 05 11:15:03 crc kubenswrapper[4796]: I1205 11:15:03.394500 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac1d671d-5a17-44f2-8131-1cc6f851a77b-kube-api-access-2s2np" (OuterVolumeSpecName: "kube-api-access-2s2np") pod "ac1d671d-5a17-44f2-8131-1cc6f851a77b" (UID: "ac1d671d-5a17-44f2-8131-1cc6f851a77b"). InnerVolumeSpecName "kube-api-access-2s2np". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:15:03 crc kubenswrapper[4796]: I1205 11:15:03.490099 4796 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac1d671d-5a17-44f2-8131-1cc6f851a77b-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 05 11:15:03 crc kubenswrapper[4796]: I1205 11:15:03.490401 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s2np\" (UniqueName: \"kubernetes.io/projected/ac1d671d-5a17-44f2-8131-1cc6f851a77b-kube-api-access-2s2np\") on node \"crc\" DevicePath \"\"" Dec 05 11:15:03 crc kubenswrapper[4796]: I1205 11:15:03.490464 4796 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac1d671d-5a17-44f2-8131-1cc6f851a77b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 05 11:15:04 crc kubenswrapper[4796]: I1205 11:15:04.079486 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp" event={"ID":"ac1d671d-5a17-44f2-8131-1cc6f851a77b","Type":"ContainerDied","Data":"6f788db15cf9b10e3b971e650345ae19fa8cffa0c85acff8ce90c8c827c24899"} Dec 05 11:15:04 crc kubenswrapper[4796]: I1205 11:15:04.079527 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f788db15cf9b10e3b971e650345ae19fa8cffa0c85acff8ce90c8c827c24899" Dec 05 11:15:04 crc kubenswrapper[4796]: I1205 11:15:04.079710 4796 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29415555-sldvp" Dec 05 11:15:04 crc kubenswrapper[4796]: I1205 11:15:04.444651 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw"] Dec 05 11:15:04 crc kubenswrapper[4796]: I1205 11:15:04.458111 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29415510-kdvgw"] Dec 05 11:15:05 crc kubenswrapper[4796]: I1205 11:15:05.089747 4796 generic.go:334] "Generic (PLEG): container finished" podID="9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9" containerID="f641734d8f5af51af59654ac268548f2bd2d3363eb96ff25e523ac6d0f22c47e" exitCode=0 Dec 05 11:15:05 crc kubenswrapper[4796]: I1205 11:15:05.089802 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c8nrm/crc-debug-kwmww" event={"ID":"9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9","Type":"ContainerDied","Data":"f641734d8f5af51af59654ac268548f2bd2d3363eb96ff25e523ac6d0f22c47e"} Dec 05 11:15:05 crc kubenswrapper[4796]: I1205 11:15:05.177612 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 11:15:05 crc kubenswrapper[4796]: I1205 11:15:05.177697 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 11:15:06 crc kubenswrapper[4796]: I1205 11:15:06.040890 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ea62ad1-69f6-43c0-a663-293a0346277c" 
path="/var/lib/kubelet/pods/7ea62ad1-69f6-43c0-a663-293a0346277c/volumes" Dec 05 11:15:06 crc kubenswrapper[4796]: I1205 11:15:06.172165 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c8nrm/crc-debug-kwmww" Dec 05 11:15:06 crc kubenswrapper[4796]: I1205 11:15:06.196387 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c8nrm/crc-debug-kwmww"] Dec 05 11:15:06 crc kubenswrapper[4796]: I1205 11:15:06.201786 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c8nrm/crc-debug-kwmww"] Dec 05 11:15:06 crc kubenswrapper[4796]: I1205 11:15:06.233629 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9-host\") pod \"9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9\" (UID: \"9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9\") " Dec 05 11:15:06 crc kubenswrapper[4796]: I1205 11:15:06.233743 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9-host" (OuterVolumeSpecName: "host") pod "9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9" (UID: "9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 11:15:06 crc kubenswrapper[4796]: I1205 11:15:06.233784 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcb82\" (UniqueName: \"kubernetes.io/projected/9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9-kube-api-access-hcb82\") pod \"9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9\" (UID: \"9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9\") " Dec 05 11:15:06 crc kubenswrapper[4796]: I1205 11:15:06.234282 4796 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9-host\") on node \"crc\" DevicePath \"\"" Dec 05 11:15:06 crc kubenswrapper[4796]: I1205 11:15:06.240199 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9-kube-api-access-hcb82" (OuterVolumeSpecName: "kube-api-access-hcb82") pod "9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9" (UID: "9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9"). InnerVolumeSpecName "kube-api-access-hcb82". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:15:06 crc kubenswrapper[4796]: I1205 11:15:06.352412 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcb82\" (UniqueName: \"kubernetes.io/projected/9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9-kube-api-access-hcb82\") on node \"crc\" DevicePath \"\"" Dec 05 11:15:07 crc kubenswrapper[4796]: I1205 11:15:07.104821 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="589200b76292bfa105810153b5f5847a5b86a6d2dbd58812daff5ccfd3330931" Dec 05 11:15:07 crc kubenswrapper[4796]: I1205 11:15:07.105520 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c8nrm/crc-debug-kwmww" Dec 05 11:15:07 crc kubenswrapper[4796]: I1205 11:15:07.345295 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c8nrm/crc-debug-qks5g"] Dec 05 11:15:07 crc kubenswrapper[4796]: E1205 11:15:07.345715 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1d671d-5a17-44f2-8131-1cc6f851a77b" containerName="collect-profiles" Dec 05 11:15:07 crc kubenswrapper[4796]: I1205 11:15:07.345732 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1d671d-5a17-44f2-8131-1cc6f851a77b" containerName="collect-profiles" Dec 05 11:15:07 crc kubenswrapper[4796]: E1205 11:15:07.345758 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9" containerName="container-00" Dec 05 11:15:07 crc kubenswrapper[4796]: I1205 11:15:07.345764 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9" containerName="container-00" Dec 05 11:15:07 crc kubenswrapper[4796]: I1205 11:15:07.345972 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9" containerName="container-00" Dec 05 11:15:07 crc kubenswrapper[4796]: I1205 11:15:07.345994 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac1d671d-5a17-44f2-8131-1cc6f851a77b" containerName="collect-profiles" Dec 05 11:15:07 crc kubenswrapper[4796]: I1205 11:15:07.346598 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c8nrm/crc-debug-qks5g" Dec 05 11:15:07 crc kubenswrapper[4796]: I1205 11:15:07.348423 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-c8nrm"/"default-dockercfg-rgs4t" Dec 05 11:15:07 crc kubenswrapper[4796]: I1205 11:15:07.470558 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgvtc\" (UniqueName: \"kubernetes.io/projected/aea276e8-22c2-4c9f-b504-1ed3b4a306de-kube-api-access-wgvtc\") pod \"crc-debug-qks5g\" (UID: \"aea276e8-22c2-4c9f-b504-1ed3b4a306de\") " pod="openshift-must-gather-c8nrm/crc-debug-qks5g" Dec 05 11:15:07 crc kubenswrapper[4796]: I1205 11:15:07.470743 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aea276e8-22c2-4c9f-b504-1ed3b4a306de-host\") pod \"crc-debug-qks5g\" (UID: \"aea276e8-22c2-4c9f-b504-1ed3b4a306de\") " pod="openshift-must-gather-c8nrm/crc-debug-qks5g" Dec 05 11:15:07 crc kubenswrapper[4796]: I1205 11:15:07.571886 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgvtc\" (UniqueName: \"kubernetes.io/projected/aea276e8-22c2-4c9f-b504-1ed3b4a306de-kube-api-access-wgvtc\") pod \"crc-debug-qks5g\" (UID: \"aea276e8-22c2-4c9f-b504-1ed3b4a306de\") " pod="openshift-must-gather-c8nrm/crc-debug-qks5g" Dec 05 11:15:07 crc kubenswrapper[4796]: I1205 11:15:07.572167 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aea276e8-22c2-4c9f-b504-1ed3b4a306de-host\") pod \"crc-debug-qks5g\" (UID: \"aea276e8-22c2-4c9f-b504-1ed3b4a306de\") " pod="openshift-must-gather-c8nrm/crc-debug-qks5g" Dec 05 11:15:07 crc kubenswrapper[4796]: I1205 11:15:07.572258 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/aea276e8-22c2-4c9f-b504-1ed3b4a306de-host\") pod \"crc-debug-qks5g\" (UID: \"aea276e8-22c2-4c9f-b504-1ed3b4a306de\") " pod="openshift-must-gather-c8nrm/crc-debug-qks5g" Dec 05 11:15:07 crc kubenswrapper[4796]: I1205 11:15:07.589662 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgvtc\" (UniqueName: \"kubernetes.io/projected/aea276e8-22c2-4c9f-b504-1ed3b4a306de-kube-api-access-wgvtc\") pod \"crc-debug-qks5g\" (UID: \"aea276e8-22c2-4c9f-b504-1ed3b4a306de\") " pod="openshift-must-gather-c8nrm/crc-debug-qks5g" Dec 05 11:15:07 crc kubenswrapper[4796]: I1205 11:15:07.660516 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c8nrm/crc-debug-qks5g" Dec 05 11:15:08 crc kubenswrapper[4796]: I1205 11:15:08.041488 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9" path="/var/lib/kubelet/pods/9218cd6e-bda0-47c9-a98e-3c5f9cc9a0d9/volumes" Dec 05 11:15:08 crc kubenswrapper[4796]: I1205 11:15:08.113826 4796 generic.go:334] "Generic (PLEG): container finished" podID="aea276e8-22c2-4c9f-b504-1ed3b4a306de" containerID="e8eb6d24a884a78d10990423b356eb9bc47909d83c4be9d9c7eaea9e023952e3" exitCode=0 Dec 05 11:15:08 crc kubenswrapper[4796]: I1205 11:15:08.113875 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c8nrm/crc-debug-qks5g" event={"ID":"aea276e8-22c2-4c9f-b504-1ed3b4a306de","Type":"ContainerDied","Data":"e8eb6d24a884a78d10990423b356eb9bc47909d83c4be9d9c7eaea9e023952e3"} Dec 05 11:15:08 crc kubenswrapper[4796]: I1205 11:15:08.113902 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c8nrm/crc-debug-qks5g" event={"ID":"aea276e8-22c2-4c9f-b504-1ed3b4a306de","Type":"ContainerStarted","Data":"d612c1df26c973dcb37a62aa98d6d6a3987a1780d29e85cb380b3647b093b193"} Dec 05 11:15:08 crc kubenswrapper[4796]: I1205 11:15:08.591089 4796 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-must-gather-c8nrm/crc-debug-qks5g"] Dec 05 11:15:08 crc kubenswrapper[4796]: I1205 11:15:08.597984 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c8nrm/crc-debug-qks5g"] Dec 05 11:15:09 crc kubenswrapper[4796]: I1205 11:15:09.189559 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c8nrm/crc-debug-qks5g" Dec 05 11:15:09 crc kubenswrapper[4796]: I1205 11:15:09.296423 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgvtc\" (UniqueName: \"kubernetes.io/projected/aea276e8-22c2-4c9f-b504-1ed3b4a306de-kube-api-access-wgvtc\") pod \"aea276e8-22c2-4c9f-b504-1ed3b4a306de\" (UID: \"aea276e8-22c2-4c9f-b504-1ed3b4a306de\") " Dec 05 11:15:09 crc kubenswrapper[4796]: I1205 11:15:09.296461 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aea276e8-22c2-4c9f-b504-1ed3b4a306de-host\") pod \"aea276e8-22c2-4c9f-b504-1ed3b4a306de\" (UID: \"aea276e8-22c2-4c9f-b504-1ed3b4a306de\") " Dec 05 11:15:09 crc kubenswrapper[4796]: I1205 11:15:09.296617 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aea276e8-22c2-4c9f-b504-1ed3b4a306de-host" (OuterVolumeSpecName: "host") pod "aea276e8-22c2-4c9f-b504-1ed3b4a306de" (UID: "aea276e8-22c2-4c9f-b504-1ed3b4a306de"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 11:15:09 crc kubenswrapper[4796]: I1205 11:15:09.297064 4796 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aea276e8-22c2-4c9f-b504-1ed3b4a306de-host\") on node \"crc\" DevicePath \"\"" Dec 05 11:15:09 crc kubenswrapper[4796]: I1205 11:15:09.301406 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aea276e8-22c2-4c9f-b504-1ed3b4a306de-kube-api-access-wgvtc" (OuterVolumeSpecName: "kube-api-access-wgvtc") pod "aea276e8-22c2-4c9f-b504-1ed3b4a306de" (UID: "aea276e8-22c2-4c9f-b504-1ed3b4a306de"). InnerVolumeSpecName "kube-api-access-wgvtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:15:09 crc kubenswrapper[4796]: I1205 11:15:09.399457 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgvtc\" (UniqueName: \"kubernetes.io/projected/aea276e8-22c2-4c9f-b504-1ed3b4a306de-kube-api-access-wgvtc\") on node \"crc\" DevicePath \"\"" Dec 05 11:15:09 crc kubenswrapper[4796]: I1205 11:15:09.702202 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c8nrm/crc-debug-s4kwx"] Dec 05 11:15:09 crc kubenswrapper[4796]: E1205 11:15:09.702578 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aea276e8-22c2-4c9f-b504-1ed3b4a306de" containerName="container-00" Dec 05 11:15:09 crc kubenswrapper[4796]: I1205 11:15:09.702597 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="aea276e8-22c2-4c9f-b504-1ed3b4a306de" containerName="container-00" Dec 05 11:15:09 crc kubenswrapper[4796]: I1205 11:15:09.702777 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="aea276e8-22c2-4c9f-b504-1ed3b4a306de" containerName="container-00" Dec 05 11:15:09 crc kubenswrapper[4796]: I1205 11:15:09.703341 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c8nrm/crc-debug-s4kwx" Dec 05 11:15:09 crc kubenswrapper[4796]: I1205 11:15:09.805636 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0bd5c4f6-17ec-4b96-af71-718928571298-host\") pod \"crc-debug-s4kwx\" (UID: \"0bd5c4f6-17ec-4b96-af71-718928571298\") " pod="openshift-must-gather-c8nrm/crc-debug-s4kwx" Dec 05 11:15:09 crc kubenswrapper[4796]: I1205 11:15:09.805701 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7b7g\" (UniqueName: \"kubernetes.io/projected/0bd5c4f6-17ec-4b96-af71-718928571298-kube-api-access-v7b7g\") pod \"crc-debug-s4kwx\" (UID: \"0bd5c4f6-17ec-4b96-af71-718928571298\") " pod="openshift-must-gather-c8nrm/crc-debug-s4kwx" Dec 05 11:15:09 crc kubenswrapper[4796]: I1205 11:15:09.907049 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0bd5c4f6-17ec-4b96-af71-718928571298-host\") pod \"crc-debug-s4kwx\" (UID: \"0bd5c4f6-17ec-4b96-af71-718928571298\") " pod="openshift-must-gather-c8nrm/crc-debug-s4kwx" Dec 05 11:15:09 crc kubenswrapper[4796]: I1205 11:15:09.907096 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7b7g\" (UniqueName: \"kubernetes.io/projected/0bd5c4f6-17ec-4b96-af71-718928571298-kube-api-access-v7b7g\") pod \"crc-debug-s4kwx\" (UID: \"0bd5c4f6-17ec-4b96-af71-718928571298\") " pod="openshift-must-gather-c8nrm/crc-debug-s4kwx" Dec 05 11:15:09 crc kubenswrapper[4796]: I1205 11:15:09.907206 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0bd5c4f6-17ec-4b96-af71-718928571298-host\") pod \"crc-debug-s4kwx\" (UID: \"0bd5c4f6-17ec-4b96-af71-718928571298\") " pod="openshift-must-gather-c8nrm/crc-debug-s4kwx" Dec 05 11:15:09 crc 
kubenswrapper[4796]: I1205 11:15:09.923091 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7b7g\" (UniqueName: \"kubernetes.io/projected/0bd5c4f6-17ec-4b96-af71-718928571298-kube-api-access-v7b7g\") pod \"crc-debug-s4kwx\" (UID: \"0bd5c4f6-17ec-4b96-af71-718928571298\") " pod="openshift-must-gather-c8nrm/crc-debug-s4kwx" Dec 05 11:15:10 crc kubenswrapper[4796]: I1205 11:15:10.015566 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c8nrm/crc-debug-s4kwx" Dec 05 11:15:10 crc kubenswrapper[4796]: I1205 11:15:10.038831 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aea276e8-22c2-4c9f-b504-1ed3b4a306de" path="/var/lib/kubelet/pods/aea276e8-22c2-4c9f-b504-1ed3b4a306de/volumes" Dec 05 11:15:10 crc kubenswrapper[4796]: W1205 11:15:10.042771 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bd5c4f6_17ec_4b96_af71_718928571298.slice/crio-7dcc7c553210439546399365d5ffdfd25e4e0a1573c0240dd9872f65b8e13ce2 WatchSource:0}: Error finding container 7dcc7c553210439546399365d5ffdfd25e4e0a1573c0240dd9872f65b8e13ce2: Status 404 returned error can't find the container with id 7dcc7c553210439546399365d5ffdfd25e4e0a1573c0240dd9872f65b8e13ce2 Dec 05 11:15:10 crc kubenswrapper[4796]: I1205 11:15:10.128013 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c8nrm/crc-debug-s4kwx" event={"ID":"0bd5c4f6-17ec-4b96-af71-718928571298","Type":"ContainerStarted","Data":"7dcc7c553210439546399365d5ffdfd25e4e0a1573c0240dd9872f65b8e13ce2"} Dec 05 11:15:10 crc kubenswrapper[4796]: I1205 11:15:10.129992 4796 scope.go:117] "RemoveContainer" containerID="e8eb6d24a884a78d10990423b356eb9bc47909d83c4be9d9c7eaea9e023952e3" Dec 05 11:15:10 crc kubenswrapper[4796]: I1205 11:15:10.130097 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c8nrm/crc-debug-qks5g" Dec 05 11:15:11 crc kubenswrapper[4796]: I1205 11:15:11.140146 4796 generic.go:334] "Generic (PLEG): container finished" podID="0bd5c4f6-17ec-4b96-af71-718928571298" containerID="001d74038ff679a8d6610d669a355f7d7ff1226bc9cee5277f581b90f140906f" exitCode=0 Dec 05 11:15:11 crc kubenswrapper[4796]: I1205 11:15:11.140263 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c8nrm/crc-debug-s4kwx" event={"ID":"0bd5c4f6-17ec-4b96-af71-718928571298","Type":"ContainerDied","Data":"001d74038ff679a8d6610d669a355f7d7ff1226bc9cee5277f581b90f140906f"} Dec 05 11:15:11 crc kubenswrapper[4796]: I1205 11:15:11.175581 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c8nrm/crc-debug-s4kwx"] Dec 05 11:15:11 crc kubenswrapper[4796]: I1205 11:15:11.181051 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c8nrm/crc-debug-s4kwx"] Dec 05 11:15:12 crc kubenswrapper[4796]: I1205 11:15:12.230095 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c8nrm/crc-debug-s4kwx" Dec 05 11:15:12 crc kubenswrapper[4796]: I1205 11:15:12.245862 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0bd5c4f6-17ec-4b96-af71-718928571298-host\") pod \"0bd5c4f6-17ec-4b96-af71-718928571298\" (UID: \"0bd5c4f6-17ec-4b96-af71-718928571298\") " Dec 05 11:15:12 crc kubenswrapper[4796]: I1205 11:15:12.246006 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bd5c4f6-17ec-4b96-af71-718928571298-host" (OuterVolumeSpecName: "host") pod "0bd5c4f6-17ec-4b96-af71-718928571298" (UID: "0bd5c4f6-17ec-4b96-af71-718928571298"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 11:15:12 crc kubenswrapper[4796]: I1205 11:15:12.246152 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7b7g\" (UniqueName: \"kubernetes.io/projected/0bd5c4f6-17ec-4b96-af71-718928571298-kube-api-access-v7b7g\") pod \"0bd5c4f6-17ec-4b96-af71-718928571298\" (UID: \"0bd5c4f6-17ec-4b96-af71-718928571298\") " Dec 05 11:15:12 crc kubenswrapper[4796]: I1205 11:15:12.246518 4796 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0bd5c4f6-17ec-4b96-af71-718928571298-host\") on node \"crc\" DevicePath \"\"" Dec 05 11:15:12 crc kubenswrapper[4796]: I1205 11:15:12.251312 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bd5c4f6-17ec-4b96-af71-718928571298-kube-api-access-v7b7g" (OuterVolumeSpecName: "kube-api-access-v7b7g") pod "0bd5c4f6-17ec-4b96-af71-718928571298" (UID: "0bd5c4f6-17ec-4b96-af71-718928571298"). InnerVolumeSpecName "kube-api-access-v7b7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:15:12 crc kubenswrapper[4796]: I1205 11:15:12.347669 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7b7g\" (UniqueName: \"kubernetes.io/projected/0bd5c4f6-17ec-4b96-af71-718928571298-kube-api-access-v7b7g\") on node \"crc\" DevicePath \"\"" Dec 05 11:15:13 crc kubenswrapper[4796]: I1205 11:15:13.157162 4796 scope.go:117] "RemoveContainer" containerID="001d74038ff679a8d6610d669a355f7d7ff1226bc9cee5277f581b90f140906f" Dec 05 11:15:13 crc kubenswrapper[4796]: I1205 11:15:13.157221 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c8nrm/crc-debug-s4kwx" Dec 05 11:15:14 crc kubenswrapper[4796]: I1205 11:15:14.039021 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bd5c4f6-17ec-4b96-af71-718928571298" path="/var/lib/kubelet/pods/0bd5c4f6-17ec-4b96-af71-718928571298/volumes" Dec 05 11:15:22 crc kubenswrapper[4796]: I1205 11:15:22.987145 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-56d7c49fdd-qssn9_1f02773b-d7af-447e-ab61-e59b12b5b138/barbican-api/0.log" Dec 05 11:15:23 crc kubenswrapper[4796]: I1205 11:15:23.060267 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-56d7c49fdd-qssn9_1f02773b-d7af-447e-ab61-e59b12b5b138/barbican-api-log/0.log" Dec 05 11:15:23 crc kubenswrapper[4796]: I1205 11:15:23.123196 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-b4d7f4754-f9kqr_6edb9afc-40e5-4a55-bf3e-b77c4fe4951b/barbican-keystone-listener/0.log" Dec 05 11:15:23 crc kubenswrapper[4796]: I1205 11:15:23.190978 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-b4d7f4754-f9kqr_6edb9afc-40e5-4a55-bf3e-b77c4fe4951b/barbican-keystone-listener-log/0.log" Dec 05 11:15:23 crc kubenswrapper[4796]: I1205 11:15:23.296475 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9b577c7dc-s8nmt_116dc4c5-e13e-494d-8909-3a3e23c45ec1/barbican-worker/0.log" Dec 05 11:15:23 crc kubenswrapper[4796]: I1205 11:15:23.302895 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9b577c7dc-s8nmt_116dc4c5-e13e-494d-8909-3a3e23c45ec1/barbican-worker-log/0.log" Dec 05 11:15:23 crc kubenswrapper[4796]: I1205 11:15:23.439291 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c_9ce2ee96-991c-49bc-b64d-1dee82bc425a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:15:23 crc kubenswrapper[4796]: I1205 11:15:23.497700 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ea696700-f56d-4ca9-a810-410a2061a80e/ceilometer-central-agent/0.log" Dec 05 11:15:23 crc kubenswrapper[4796]: I1205 11:15:23.569736 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ea696700-f56d-4ca9-a810-410a2061a80e/ceilometer-notification-agent/0.log" Dec 05 11:15:23 crc kubenswrapper[4796]: I1205 11:15:23.602873 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ea696700-f56d-4ca9-a810-410a2061a80e/proxy-httpd/0.log" Dec 05 11:15:23 crc kubenswrapper[4796]: I1205 11:15:23.633243 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ea696700-f56d-4ca9-a810-410a2061a80e/sg-core/0.log" Dec 05 11:15:23 crc kubenswrapper[4796]: I1205 11:15:23.748505 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8ffca929-12a6-40b2-96ee-ff84ea1818dc/cinder-api/0.log" Dec 05 11:15:23 crc kubenswrapper[4796]: I1205 11:15:23.754412 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8ffca929-12a6-40b2-96ee-ff84ea1818dc/cinder-api-log/0.log" Dec 05 11:15:23 crc kubenswrapper[4796]: I1205 11:15:23.875796 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c53203c0-53e8-4b2b-90d3-a9833bd9e7f2/cinder-scheduler/0.log" Dec 05 11:15:23 crc kubenswrapper[4796]: I1205 11:15:23.918844 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c53203c0-53e8-4b2b-90d3-a9833bd9e7f2/probe/0.log" Dec 05 11:15:24 crc kubenswrapper[4796]: I1205 11:15:24.023845 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r_4856a801-fa7d-4150-b557-1b1a0066ce78/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:15:24 crc kubenswrapper[4796]: I1205 11:15:24.074129 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-6mflz_b4f493f2-c177-4784-8c6c-07c52336c07a/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:15:24 crc kubenswrapper[4796]: I1205 11:15:24.188549 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6574f55bb5-jfc7f_0dc9fea5-76b4-465f-9a96-a198004f4c2c/init/0.log" Dec 05 11:15:24 crc kubenswrapper[4796]: I1205 11:15:24.353534 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6574f55bb5-jfc7f_0dc9fea5-76b4-465f-9a96-a198004f4c2c/dnsmasq-dns/0.log" Dec 05 11:15:24 crc kubenswrapper[4796]: I1205 11:15:24.386768 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6574f55bb5-jfc7f_0dc9fea5-76b4-465f-9a96-a198004f4c2c/init/0.log" Dec 05 11:15:24 crc kubenswrapper[4796]: I1205 11:15:24.401293 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz_a1045534-e8dd-4d18-a198-d50d1af5d79b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:15:24 crc kubenswrapper[4796]: I1205 11:15:24.523411 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f/glance-httpd/0.log" Dec 05 11:15:24 crc kubenswrapper[4796]: I1205 11:15:24.540701 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f/glance-log/0.log" Dec 05 11:15:24 crc kubenswrapper[4796]: I1205 11:15:24.667557 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_3e544456-c953-47f0-b274-0fc5d07483ce/glance-httpd/0.log" Dec 05 11:15:24 crc kubenswrapper[4796]: I1205 11:15:24.690724 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3e544456-c953-47f0-b274-0fc5d07483ce/glance-log/0.log" Dec 05 11:15:24 crc kubenswrapper[4796]: I1205 11:15:24.774710 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-78ddb58-f7j44_05623376-2343-40fb-a4df-508ce1e333e2/horizon/0.log" Dec 05 11:15:24 crc kubenswrapper[4796]: I1205 11:15:24.944940 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp_c5d11e0e-4240-4769-8a8a-945f78970a6c/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:15:24 crc kubenswrapper[4796]: I1205 11:15:24.989920 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-78ddb58-f7j44_05623376-2343-40fb-a4df-508ce1e333e2/horizon-log/0.log" Dec 05 11:15:25 crc kubenswrapper[4796]: I1205 11:15:25.058267 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-ktktx_61349c4c-5e04-4781-bbd0-1e6930083dd1/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:15:25 crc kubenswrapper[4796]: I1205 11:15:25.255754 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29415541-7j2vs_8745cb1c-046c-423c-ada4-99fac12690eb/keystone-cron/0.log" Dec 05 11:15:25 crc kubenswrapper[4796]: I1205 11:15:25.257220 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5dbb66bc57-9lclx_e6b0a08e-b4c3-4947-9c4e-67a863d92dca/keystone-api/0.log" Dec 05 11:15:25 crc kubenswrapper[4796]: I1205 11:15:25.363877 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_0e251696-adfe-46cb-87c3-651b3e038af2/kube-state-metrics/0.log" 
Dec 05 11:15:25 crc kubenswrapper[4796]: I1205 11:15:25.447945 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8_0aba1742-7328-48a0-b9f5-af4c66636de3/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:15:25 crc kubenswrapper[4796]: I1205 11:15:25.800020 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-fb965878c-qncj9_22c4293b-736f-4c5a-b47d-6a0a870bf1da/neutron-api/0.log" Dec 05 11:15:25 crc kubenswrapper[4796]: I1205 11:15:25.836957 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-fb965878c-qncj9_22c4293b-736f-4c5a-b47d-6a0a870bf1da/neutron-httpd/0.log" Dec 05 11:15:26 crc kubenswrapper[4796]: I1205 11:15:26.060172 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk_692ab668-84d9-4673-8601-c09b4025b5fe/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:15:26 crc kubenswrapper[4796]: I1205 11:15:26.675958 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b1096b18-a70c-4076-ac0c-0a57532ec40e/nova-cell0-conductor-conductor/0.log" Dec 05 11:15:26 crc kubenswrapper[4796]: I1205 11:15:26.689188 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f0b532d4-1ab1-4d26-ac86-269a32a1bade/nova-api-log/0.log" Dec 05 11:15:26 crc kubenswrapper[4796]: I1205 11:15:26.788927 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_70590d4b-3f25-4359-8a2d-984d9d98a9ed/nova-cell1-conductor-conductor/0.log" Dec 05 11:15:26 crc kubenswrapper[4796]: I1205 11:15:26.860141 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f0b532d4-1ab1-4d26-ac86-269a32a1bade/nova-api-api/0.log" Dec 05 11:15:27 crc kubenswrapper[4796]: I1205 11:15:27.122722 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6d012a13-a56f-40e9-9689-43405f7c5cfd/nova-cell1-novncproxy-novncproxy/0.log" Dec 05 11:15:27 crc kubenswrapper[4796]: I1205 11:15:27.212402 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-47tqx_43ae5283-caa9-4308-b825-3c937081341c/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:15:27 crc kubenswrapper[4796]: I1205 11:15:27.383215 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76/nova-metadata-log/0.log" Dec 05 11:15:27 crc kubenswrapper[4796]: I1205 11:15:27.627844 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e1935b08-3dfe-4990-9296-634f2dde999f/nova-scheduler-scheduler/0.log" Dec 05 11:15:27 crc kubenswrapper[4796]: I1205 11:15:27.668090 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0deffc65-bff4-419f-aa12-2c17432112a3/mysql-bootstrap/0.log" Dec 05 11:15:27 crc kubenswrapper[4796]: I1205 11:15:27.870375 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0deffc65-bff4-419f-aa12-2c17432112a3/mysql-bootstrap/0.log" Dec 05 11:15:27 crc kubenswrapper[4796]: I1205 11:15:27.916982 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0deffc65-bff4-419f-aa12-2c17432112a3/galera/0.log" Dec 05 11:15:28 crc kubenswrapper[4796]: I1205 11:15:28.048770 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e649ad8d-0ed0-495c-9abc-7220d750f060/mysql-bootstrap/0.log" Dec 05 11:15:28 crc kubenswrapper[4796]: I1205 11:15:28.173486 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e649ad8d-0ed0-495c-9abc-7220d750f060/mysql-bootstrap/0.log" Dec 05 11:15:28 crc kubenswrapper[4796]: I1205 11:15:28.215423 4796 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e649ad8d-0ed0-495c-9abc-7220d750f060/galera/0.log" Dec 05 11:15:28 crc kubenswrapper[4796]: I1205 11:15:28.303223 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76/nova-metadata-metadata/0.log" Dec 05 11:15:28 crc kubenswrapper[4796]: I1205 11:15:28.338240 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_127c1064-4744-43ab-afb3-91c03cee795d/openstackclient/0.log" Dec 05 11:15:28 crc kubenswrapper[4796]: I1205 11:15:28.477930 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-thfdb_dbbb0427-a102-4a95-a44e-d809d4334090/openstack-network-exporter/0.log" Dec 05 11:15:28 crc kubenswrapper[4796]: I1205 11:15:28.569408 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-z7p8z_f90214e2-6d6d-42b7-8a46-0fb779d31cba/ovsdb-server-init/0.log" Dec 05 11:15:28 crc kubenswrapper[4796]: I1205 11:15:28.760184 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-z7p8z_f90214e2-6d6d-42b7-8a46-0fb779d31cba/ovsdb-server-init/0.log" Dec 05 11:15:28 crc kubenswrapper[4796]: I1205 11:15:28.827958 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-z7p8z_f90214e2-6d6d-42b7-8a46-0fb779d31cba/ovsdb-server/0.log" Dec 05 11:15:28 crc kubenswrapper[4796]: I1205 11:15:28.843736 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-z7p8z_f90214e2-6d6d-42b7-8a46-0fb779d31cba/ovs-vswitchd/0.log" Dec 05 11:15:28 crc kubenswrapper[4796]: I1205 11:15:28.944746 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-vb5l5_2f5f2848-5f90-4a9f-a6f0-b6e83b586402/ovn-controller/0.log" Dec 05 11:15:29 crc kubenswrapper[4796]: I1205 11:15:29.104883 4796 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-wbr88_313cdd2e-bea3-40ac-aee2-b0452b059735/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:15:29 crc kubenswrapper[4796]: I1205 11:15:29.157369 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_75432bf8-6355-495f-aa1d-94928e9b15ba/openstack-network-exporter/0.log" Dec 05 11:15:29 crc kubenswrapper[4796]: I1205 11:15:29.230711 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_75432bf8-6355-495f-aa1d-94928e9b15ba/ovn-northd/0.log" Dec 05 11:15:29 crc kubenswrapper[4796]: I1205 11:15:29.294623 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7c35af0e-1df4-4529-a60f-3be3faaf8ec2/openstack-network-exporter/0.log" Dec 05 11:15:29 crc kubenswrapper[4796]: I1205 11:15:29.346621 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7c35af0e-1df4-4529-a60f-3be3faaf8ec2/ovsdbserver-nb/0.log" Dec 05 11:15:29 crc kubenswrapper[4796]: I1205 11:15:29.579429 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_929f06e1-44b6-4ce2-9391-4d41a94538fb/openstack-network-exporter/0.log" Dec 05 11:15:29 crc kubenswrapper[4796]: I1205 11:15:29.585297 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_929f06e1-44b6-4ce2-9391-4d41a94538fb/ovsdbserver-sb/0.log" Dec 05 11:15:29 crc kubenswrapper[4796]: I1205 11:15:29.705119 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-644c454648-8vjkb_8f219a85-81d2-4337-bed6-507debdb79dd/placement-api/0.log" Dec 05 11:15:29 crc kubenswrapper[4796]: I1205 11:15:29.804999 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-644c454648-8vjkb_8f219a85-81d2-4337-bed6-507debdb79dd/placement-log/0.log" Dec 05 11:15:29 crc kubenswrapper[4796]: I1205 11:15:29.855228 
4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d179e637-ffa5-41af-9038-6728586665a6/setup-container/0.log" Dec 05 11:15:30 crc kubenswrapper[4796]: I1205 11:15:30.051168 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d179e637-ffa5-41af-9038-6728586665a6/setup-container/0.log" Dec 05 11:15:30 crc kubenswrapper[4796]: I1205 11:15:30.056924 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d179e637-ffa5-41af-9038-6728586665a6/rabbitmq/0.log" Dec 05 11:15:30 crc kubenswrapper[4796]: I1205 11:15:30.064538 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_76ca18a4-f216-4325-b15a-adda1d95dddd/setup-container/0.log" Dec 05 11:15:30 crc kubenswrapper[4796]: I1205 11:15:30.289203 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_76ca18a4-f216-4325-b15a-adda1d95dddd/rabbitmq/0.log" Dec 05 11:15:30 crc kubenswrapper[4796]: I1205 11:15:30.305002 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_76ca18a4-f216-4325-b15a-adda1d95dddd/setup-container/0.log" Dec 05 11:15:30 crc kubenswrapper[4796]: I1205 11:15:30.314308 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr_b3328299-a078-40d9-90fd-94a0b4145ae5/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:15:30 crc kubenswrapper[4796]: I1205 11:15:30.563793 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-rqd97_c43b4994-6331-4ed4-9180-1b32253929cf/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:15:30 crc kubenswrapper[4796]: I1205 11:15:30.605020 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc_224a1954-bad6-417b-8942-de297ca3195c/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:15:30 crc kubenswrapper[4796]: I1205 11:15:30.776625 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-xssb4_ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:15:30 crc kubenswrapper[4796]: I1205 11:15:30.858044 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-t9tj8_9f94d556-2c7c-42db-8c80-521d055ccc68/ssh-known-hosts-edpm-deployment/0.log" Dec 05 11:15:30 crc kubenswrapper[4796]: I1205 11:15:30.978238 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-799657985-knzrm_9e4aeaf3-d2d1-43ab-8594-d293d8602be5/proxy-server/0.log" Dec 05 11:15:31 crc kubenswrapper[4796]: I1205 11:15:31.077011 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-799657985-knzrm_9e4aeaf3-d2d1-43ab-8594-d293d8602be5/proxy-httpd/0.log" Dec 05 11:15:31 crc kubenswrapper[4796]: I1205 11:15:31.113277 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-42p7z_5bfeb9c9-1808-4f43-b61b-4fafe36cda09/swift-ring-rebalance/0.log" Dec 05 11:15:31 crc kubenswrapper[4796]: I1205 11:15:31.300868 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/account-auditor/0.log" Dec 05 11:15:31 crc kubenswrapper[4796]: I1205 11:15:31.333003 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/account-reaper/0.log" Dec 05 11:15:31 crc kubenswrapper[4796]: I1205 11:15:31.349705 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/account-replicator/0.log" Dec 05 11:15:31 crc kubenswrapper[4796]: I1205 11:15:31.411037 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/account-server/0.log" Dec 05 11:15:31 crc kubenswrapper[4796]: I1205 11:15:31.492503 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/container-auditor/0.log" Dec 05 11:15:31 crc kubenswrapper[4796]: I1205 11:15:31.543912 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/container-server/0.log" Dec 05 11:15:31 crc kubenswrapper[4796]: I1205 11:15:31.579126 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/container-replicator/0.log" Dec 05 11:15:31 crc kubenswrapper[4796]: I1205 11:15:31.660700 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/container-updater/0.log" Dec 05 11:15:31 crc kubenswrapper[4796]: I1205 11:15:31.706950 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/object-auditor/0.log" Dec 05 11:15:31 crc kubenswrapper[4796]: I1205 11:15:31.764052 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/object-replicator/0.log" Dec 05 11:15:31 crc kubenswrapper[4796]: I1205 11:15:31.778334 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/object-expirer/0.log" Dec 05 11:15:31 crc kubenswrapper[4796]: I1205 11:15:31.821638 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/object-server/0.log" Dec 05 11:15:31 crc kubenswrapper[4796]: I1205 11:15:31.885194 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/object-updater/0.log" Dec 05 11:15:31 crc kubenswrapper[4796]: I1205 11:15:31.907282 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/rsync/0.log" Dec 05 11:15:31 crc kubenswrapper[4796]: I1205 11:15:31.982178 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/swift-recon-cron/0.log" Dec 05 11:15:32 crc kubenswrapper[4796]: I1205 11:15:32.134454 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4_91c465f7-7f18-43b4-9b15-d24ed713432f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:15:32 crc kubenswrapper[4796]: I1205 11:15:32.214537 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_b3a6fb02-a525-41cb-96f5-ad01c2999e4d/tempest-tests-tempest-tests-runner/0.log" Dec 05 11:15:32 crc kubenswrapper[4796]: I1205 11:15:32.330195 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_8d8df5e9-4f31-4f63-ba36-276c43b02b75/test-operator-logs-container/0.log" Dec 05 11:15:32 crc kubenswrapper[4796]: I1205 11:15:32.444447 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6_589c89c5-f3fd-44c9-ab63-8b1e4774d28f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:15:35 crc kubenswrapper[4796]: I1205 11:15:35.177876 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 11:15:35 crc kubenswrapper[4796]: I1205 11:15:35.178306 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 11:15:35 crc kubenswrapper[4796]: I1205 11:15:35.178367 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 11:15:35 crc kubenswrapper[4796]: I1205 11:15:35.179109 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af"} pod="openshift-machine-config-operator/machine-config-daemon-9pllw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 11:15:35 crc kubenswrapper[4796]: I1205 11:15:35.179162 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" containerID="cri-o://c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" gracePeriod=600 Dec 05 11:15:35 crc kubenswrapper[4796]: E1205 11:15:35.305987 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:15:35 crc kubenswrapper[4796]: I1205 11:15:35.353628 4796 generic.go:334] "Generic (PLEG): container finished" podID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" exitCode=0 Dec 05 11:15:35 crc kubenswrapper[4796]: I1205 11:15:35.353671 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerDied","Data":"c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af"} Dec 05 11:15:35 crc kubenswrapper[4796]: I1205 11:15:35.353835 4796 scope.go:117] "RemoveContainer" containerID="38444dbc2e54bd632b755d0a32520bd65ff9b96410767c08f22e96e697037a59" Dec 05 11:15:35 crc kubenswrapper[4796]: I1205 11:15:35.354818 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:15:35 crc kubenswrapper[4796]: E1205 11:15:35.355358 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:15:41 crc kubenswrapper[4796]: I1205 11:15:41.350316 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_c41250df-9d98-441a-a88f-6a17034b8d31/memcached/0.log" Dec 05 11:15:49 crc kubenswrapper[4796]: I1205 11:15:49.326073 4796 scope.go:117] "RemoveContainer" containerID="62a2978bab674f67dc7cc4e0692d055d45c445ec858d6c2ad79c884b4e06289d" Dec 05 11:15:50 crc kubenswrapper[4796]: I1205 
11:15:50.031665 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:15:50 crc kubenswrapper[4796]: E1205 11:15:50.032192 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:15:51 crc kubenswrapper[4796]: I1205 11:15:51.391721 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb_ce917c34-c2b6-4c47-ac86-8f9bd3e903a2/util/0.log" Dec 05 11:15:51 crc kubenswrapper[4796]: I1205 11:15:51.496572 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb_ce917c34-c2b6-4c47-ac86-8f9bd3e903a2/pull/0.log" Dec 05 11:15:51 crc kubenswrapper[4796]: I1205 11:15:51.507454 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb_ce917c34-c2b6-4c47-ac86-8f9bd3e903a2/util/0.log" Dec 05 11:15:51 crc kubenswrapper[4796]: I1205 11:15:51.549633 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb_ce917c34-c2b6-4c47-ac86-8f9bd3e903a2/pull/0.log" Dec 05 11:15:51 crc kubenswrapper[4796]: I1205 11:15:51.678041 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb_ce917c34-c2b6-4c47-ac86-8f9bd3e903a2/util/0.log" Dec 05 11:15:51 crc kubenswrapper[4796]: I1205 11:15:51.709675 4796 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb_ce917c34-c2b6-4c47-ac86-8f9bd3e903a2/pull/0.log" Dec 05 11:15:51 crc kubenswrapper[4796]: I1205 11:15:51.733571 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb_ce917c34-c2b6-4c47-ac86-8f9bd3e903a2/extract/0.log" Dec 05 11:15:51 crc kubenswrapper[4796]: I1205 11:15:51.823465 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-748df9766b-sv8rl_1fe815d0-1127-44a3-8d89-9964b3b5bbc2/kube-rbac-proxy/0.log" Dec 05 11:15:51 crc kubenswrapper[4796]: I1205 11:15:51.877379 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-748df9766b-sv8rl_1fe815d0-1127-44a3-8d89-9964b3b5bbc2/manager/0.log" Dec 05 11:15:51 crc kubenswrapper[4796]: I1205 11:15:51.901469 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b45f74f94-l9pgt_ecfc5e0d-8538-497e-b578-0ef75e0031db/kube-rbac-proxy/0.log" Dec 05 11:15:51 crc kubenswrapper[4796]: I1205 11:15:51.994507 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b45f74f94-l9pgt_ecfc5e0d-8538-497e-b578-0ef75e0031db/manager/0.log" Dec 05 11:15:52 crc kubenswrapper[4796]: I1205 11:15:52.051824 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5644f4c99-b5lst_42939bc6-488c-401e-a313-3b5cc9e75f3b/manager/0.log" Dec 05 11:15:52 crc kubenswrapper[4796]: I1205 11:15:52.059211 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5644f4c99-b5lst_42939bc6-488c-401e-a313-3b5cc9e75f3b/kube-rbac-proxy/0.log" Dec 05 
11:15:52 crc kubenswrapper[4796]: I1205 11:15:52.175127 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6f75fb6b58-gz4gq_3d773ce3-0d67-4965-b84e-86f922daad38/kube-rbac-proxy/0.log" Dec 05 11:15:52 crc kubenswrapper[4796]: I1205 11:15:52.255844 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6f75fb6b58-gz4gq_3d773ce3-0d67-4965-b84e-86f922daad38/manager/0.log" Dec 05 11:15:52 crc kubenswrapper[4796]: I1205 11:15:52.346678 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-db55fc494-vtkgg_09474b29-37f5-4e66-9314-6af690b94758/manager/0.log" Dec 05 11:15:52 crc kubenswrapper[4796]: I1205 11:15:52.378479 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-db55fc494-vtkgg_09474b29-37f5-4e66-9314-6af690b94758/kube-rbac-proxy/0.log" Dec 05 11:15:52 crc kubenswrapper[4796]: I1205 11:15:52.405179 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-86b7548d4c-d59d5_95efa747-ba05-4c2f-86a8-037452c66764/kube-rbac-proxy/0.log" Dec 05 11:15:52 crc kubenswrapper[4796]: I1205 11:15:52.508916 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-86b7548d4c-d59d5_95efa747-ba05-4c2f-86a8-037452c66764/manager/0.log" Dec 05 11:15:52 crc kubenswrapper[4796]: I1205 11:15:52.541001 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-64989647d4-6pkqv_1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96/kube-rbac-proxy/0.log" Dec 05 11:15:52 crc kubenswrapper[4796]: I1205 11:15:52.660017 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-7c55bc5499-tx2js_98b514bf-2dd0-4d60-9141-d70dead159cb/kube-rbac-proxy/0.log" Dec 05 11:15:52 crc kubenswrapper[4796]: I1205 11:15:52.663080 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-64989647d4-6pkqv_1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96/manager/0.log" Dec 05 11:15:52 crc kubenswrapper[4796]: I1205 11:15:52.721124 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-7c55bc5499-tx2js_98b514bf-2dd0-4d60-9141-d70dead159cb/manager/0.log" Dec 05 11:15:52 crc kubenswrapper[4796]: I1205 11:15:52.796049 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-847b767f55-wqhnd_8e886ef4-4f20-49e6-93d8-d011ac192923/kube-rbac-proxy/0.log" Dec 05 11:15:52 crc kubenswrapper[4796]: I1205 11:15:52.868703 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-847b767f55-wqhnd_8e886ef4-4f20-49e6-93d8-d011ac192923/manager/0.log" Dec 05 11:15:52 crc kubenswrapper[4796]: I1205 11:15:52.959996 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59c7d85948-v5lcv_ba227292-a494-43c0-9fbd-addbd8f48b6f/kube-rbac-proxy/0.log" Dec 05 11:15:52 crc kubenswrapper[4796]: I1205 11:15:52.987954 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59c7d85948-v5lcv_ba227292-a494-43c0-9fbd-addbd8f48b6f/manager/0.log" Dec 05 11:15:53 crc kubenswrapper[4796]: I1205 11:15:53.056038 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66b4f6f898-cqrd7_f92cf54c-1bcd-4a73-86b2-e4407908953d/kube-rbac-proxy/0.log" Dec 05 11:15:53 crc kubenswrapper[4796]: I1205 11:15:53.090046 
4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66b4f6f898-cqrd7_f92cf54c-1bcd-4a73-86b2-e4407908953d/manager/0.log" Dec 05 11:15:53 crc kubenswrapper[4796]: I1205 11:15:53.156956 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7f8bc7fb5-pm9cz_bfe3615a-d5c7-4ad2-9e1e-ac4ab279c64f/kube-rbac-proxy/0.log" Dec 05 11:15:53 crc kubenswrapper[4796]: I1205 11:15:53.227725 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7f8bc7fb5-pm9cz_bfe3615a-d5c7-4ad2-9e1e-ac4ab279c64f/manager/0.log" Dec 05 11:15:53 crc kubenswrapper[4796]: I1205 11:15:53.320616 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79b74dfcd4-mhcb5_91dc33f2-985f-41d9-8c36-4c37aed1ec16/kube-rbac-proxy/0.log" Dec 05 11:15:53 crc kubenswrapper[4796]: I1205 11:15:53.343122 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79b74dfcd4-mhcb5_91dc33f2-985f-41d9-8c36-4c37aed1ec16/manager/0.log" Dec 05 11:15:53 crc kubenswrapper[4796]: I1205 11:15:53.383055 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6869548bb4-wsr9d_845825d1-623f-4e06-9f2c-d045910eee1a/kube-rbac-proxy/0.log" Dec 05 11:15:53 crc kubenswrapper[4796]: I1205 11:15:53.474159 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6869548bb4-wsr9d_845825d1-623f-4e06-9f2c-d045910eee1a/manager/0.log" Dec 05 11:15:53 crc kubenswrapper[4796]: I1205 11:15:53.520253 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8667b5c969stxrl_9314b19e-3947-4091-af58-82275f696602/kube-rbac-proxy/0.log" Dec 05 
11:15:53 crc kubenswrapper[4796]: I1205 11:15:53.560403 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8667b5c969stxrl_9314b19e-3947-4091-af58-82275f696602/manager/0.log" Dec 05 11:15:53 crc kubenswrapper[4796]: I1205 11:15:53.642950 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-9989f4965-wmbfg_b9a78df2-ecf6-425a-ace1-f005622e0025/kube-rbac-proxy/0.log" Dec 05 11:15:53 crc kubenswrapper[4796]: I1205 11:15:53.903344 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6b8c9fb9c8-7slmm_914b4688-6153-4282-8828-65d9500a53bf/kube-rbac-proxy/0.log" Dec 05 11:15:53 crc kubenswrapper[4796]: I1205 11:15:53.969422 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6b8c9fb9c8-7slmm_914b4688-6153-4282-8828-65d9500a53bf/operator/0.log" Dec 05 11:15:54 crc kubenswrapper[4796]: I1205 11:15:54.123213 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-gk94f_cc045b46-7b1a-4a74-9bcc-9fdf067dbf3d/registry-server/0.log" Dec 05 11:15:54 crc kubenswrapper[4796]: I1205 11:15:54.152703 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-59f9cfd57b-c586s_a1fec903-f9b8-49e1-a4f0-1526dcff64ea/kube-rbac-proxy/0.log" Dec 05 11:15:54 crc kubenswrapper[4796]: I1205 11:15:54.419436 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-59f9cfd57b-c586s_a1fec903-f9b8-49e1-a4f0-1526dcff64ea/manager/0.log" Dec 05 11:15:54 crc kubenswrapper[4796]: I1205 11:15:54.440453 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589d6b8ccb-h27pk_b566972c-0250-4692-8152-31dc732b4147/kube-rbac-proxy/0.log" Dec 05 11:15:54 crc kubenswrapper[4796]: I1205 11:15:54.602820 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589d6b8ccb-h27pk_b566972c-0250-4692-8152-31dc732b4147/manager/0.log" Dec 05 11:15:54 crc kubenswrapper[4796]: I1205 11:15:54.659974 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn_ae34e165-a87d-4395-99c5-1a9f7129e6fe/operator/0.log" Dec 05 11:15:54 crc kubenswrapper[4796]: I1205 11:15:54.771696 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-9989f4965-wmbfg_b9a78df2-ecf6-425a-ace1-f005622e0025/manager/0.log" Dec 05 11:15:54 crc kubenswrapper[4796]: I1205 11:15:54.801072 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5bf496986d-rfkkm_e83951e6-5692-458d-aeba-ae8e6e8cfdd5/kube-rbac-proxy/0.log" Dec 05 11:15:54 crc kubenswrapper[4796]: I1205 11:15:54.832774 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5bf496986d-rfkkm_e83951e6-5692-458d-aeba-ae8e6e8cfdd5/manager/0.log" Dec 05 11:15:54 crc kubenswrapper[4796]: I1205 11:15:54.874588 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-75f49469b9-rs7fq_622bf26a-5bd4-4936-bd06-ae5ec514f130/kube-rbac-proxy/0.log" Dec 05 11:15:54 crc kubenswrapper[4796]: I1205 11:15:54.991735 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-75f49469b9-rs7fq_622bf26a-5bd4-4936-bd06-ae5ec514f130/manager/0.log" Dec 05 11:15:54 crc kubenswrapper[4796]: I1205 11:15:54.999640 
4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd78796cb-gcttm_aa5e433c-e704-4cbf-8db3-6efe20814f65/kube-rbac-proxy/0.log" Dec 05 11:15:55 crc kubenswrapper[4796]: I1205 11:15:55.036330 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd78796cb-gcttm_aa5e433c-e704-4cbf-8db3-6efe20814f65/manager/0.log" Dec 05 11:15:55 crc kubenswrapper[4796]: I1205 11:15:55.138493 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-784c978c5-v2fgq_a9f4475f-8ecc-4bc3-a195-e5cf592a1324/kube-rbac-proxy/0.log" Dec 05 11:15:55 crc kubenswrapper[4796]: I1205 11:15:55.152633 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-784c978c5-v2fgq_a9f4475f-8ecc-4bc3-a195-e5cf592a1324/manager/0.log" Dec 05 11:16:01 crc kubenswrapper[4796]: I1205 11:16:01.031741 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:16:01 crc kubenswrapper[4796]: E1205 11:16:01.032388 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:16:07 crc kubenswrapper[4796]: I1205 11:16:07.776305 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dktck_766637c3-bd89-4f55-950e-68c553a5c6a4/control-plane-machine-set-operator/0.log" Dec 05 11:16:07 crc kubenswrapper[4796]: I1205 11:16:07.923312 4796 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z2zft_77837609-281b-417a-8398-7732463eb92a/kube-rbac-proxy/0.log" Dec 05 11:16:07 crc kubenswrapper[4796]: I1205 11:16:07.947721 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z2zft_77837609-281b-417a-8398-7732463eb92a/machine-api-operator/0.log" Dec 05 11:16:15 crc kubenswrapper[4796]: I1205 11:16:15.031365 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:16:15 crc kubenswrapper[4796]: E1205 11:16:15.032511 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:16:18 crc kubenswrapper[4796]: I1205 11:16:18.106141 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-znnp5_4fbc7cf7-cc54-4a42-af7c-7c7451d7bfc0/cert-manager-controller/0.log" Dec 05 11:16:18 crc kubenswrapper[4796]: I1205 11:16:18.255393 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-qvqsf_55906207-dd26-4827-a801-1808e140a903/cert-manager-cainjector/0.log" Dec 05 11:16:18 crc kubenswrapper[4796]: I1205 11:16:18.288203 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-kgfhs_dd1a1054-f7c6-4515-9c64-074ce87c169f/cert-manager-webhook/0.log" Dec 05 11:16:26 crc kubenswrapper[4796]: I1205 11:16:26.749153 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-7zkrr_5df58dfa-f2af-439b-bd54-5253e8804e10/nmstate-console-plugin/0.log" Dec 05 11:16:26 crc kubenswrapper[4796]: I1205 11:16:26.889014 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-nmcxh_3d6ead77-e785-46e2-a9d8-1cb1bf83ae85/nmstate-handler/0.log" Dec 05 11:16:26 crc kubenswrapper[4796]: I1205 11:16:26.914398 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-wlc9g_6dbc6bec-7e30-4b6e-9db8-b0ea8f210baa/kube-rbac-proxy/0.log" Dec 05 11:16:26 crc kubenswrapper[4796]: I1205 11:16:26.952061 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-wlc9g_6dbc6bec-7e30-4b6e-9db8-b0ea8f210baa/nmstate-metrics/0.log" Dec 05 11:16:27 crc kubenswrapper[4796]: I1205 11:16:27.054404 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-8pj5q_f9217359-3b50-4237-bff8-75ff7eebf333/nmstate-operator/0.log" Dec 05 11:16:27 crc kubenswrapper[4796]: I1205 11:16:27.087118 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-sflfc_b69ee833-32fa-4d5f-b561-38b4e4a89a58/nmstate-webhook/0.log" Dec 05 11:16:28 crc kubenswrapper[4796]: I1205 11:16:28.031513 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:16:28 crc kubenswrapper[4796]: E1205 11:16:28.031761 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 
05 11:16:38 crc kubenswrapper[4796]: I1205 11:16:38.010552 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-rnnld_131fd9f0-c98b-45a8-9443-fb22ab2c6c28/kube-rbac-proxy/0.log" Dec 05 11:16:38 crc kubenswrapper[4796]: I1205 11:16:38.064122 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-rnnld_131fd9f0-c98b-45a8-9443-fb22ab2c6c28/controller/0.log" Dec 05 11:16:38 crc kubenswrapper[4796]: I1205 11:16:38.169709 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-frr-files/0.log" Dec 05 11:16:38 crc kubenswrapper[4796]: I1205 11:16:38.346165 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-frr-files/0.log" Dec 05 11:16:38 crc kubenswrapper[4796]: I1205 11:16:38.348041 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-reloader/0.log" Dec 05 11:16:38 crc kubenswrapper[4796]: I1205 11:16:38.356361 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-metrics/0.log" Dec 05 11:16:38 crc kubenswrapper[4796]: I1205 11:16:38.397105 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-reloader/0.log" Dec 05 11:16:38 crc kubenswrapper[4796]: I1205 11:16:38.528234 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-reloader/0.log" Dec 05 11:16:38 crc kubenswrapper[4796]: I1205 11:16:38.533524 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-frr-files/0.log" Dec 05 11:16:38 crc kubenswrapper[4796]: I1205 11:16:38.571458 
4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-metrics/0.log" Dec 05 11:16:38 crc kubenswrapper[4796]: I1205 11:16:38.587089 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-metrics/0.log" Dec 05 11:16:38 crc kubenswrapper[4796]: I1205 11:16:38.716335 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-reloader/0.log" Dec 05 11:16:38 crc kubenswrapper[4796]: I1205 11:16:38.726378 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-frr-files/0.log" Dec 05 11:16:38 crc kubenswrapper[4796]: I1205 11:16:38.731932 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-metrics/0.log" Dec 05 11:16:38 crc kubenswrapper[4796]: I1205 11:16:38.740095 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/controller/0.log" Dec 05 11:16:38 crc kubenswrapper[4796]: I1205 11:16:38.868149 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/kube-rbac-proxy/0.log" Dec 05 11:16:38 crc kubenswrapper[4796]: I1205 11:16:38.875119 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/frr-metrics/0.log" Dec 05 11:16:38 crc kubenswrapper[4796]: I1205 11:16:38.930952 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/kube-rbac-proxy-frr/0.log" Dec 05 11:16:39 crc kubenswrapper[4796]: I1205 11:16:39.069132 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/reloader/0.log" Dec 05 11:16:39 crc kubenswrapper[4796]: I1205 11:16:39.117083 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-cvrqt_e62fb2ad-7ed8-4e09-bab4-9a5ea8b08610/frr-k8s-webhook-server/0.log" Dec 05 11:16:39 crc kubenswrapper[4796]: I1205 11:16:39.341771 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5848764dff-zxvnb_0a2ba1fb-8eb8-497a-8f31-931aca49243e/manager/0.log" Dec 05 11:16:39 crc kubenswrapper[4796]: I1205 11:16:39.438271 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-695488f64b-2qxth_249e98a9-0d33-4f5c-8102-03e46de4d20e/webhook-server/0.log" Dec 05 11:16:39 crc kubenswrapper[4796]: I1205 11:16:39.549707 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-q4znh_ad3da70e-b17d-414e-a68f-197abce5d6fe/kube-rbac-proxy/0.log" Dec 05 11:16:40 crc kubenswrapper[4796]: I1205 11:16:40.076109 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-q4znh_ad3da70e-b17d-414e-a68f-197abce5d6fe/speaker/0.log" Dec 05 11:16:40 crc kubenswrapper[4796]: I1205 11:16:40.141471 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/frr/0.log" Dec 05 11:16:43 crc kubenswrapper[4796]: I1205 11:16:43.033359 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:16:43 crc kubenswrapper[4796]: E1205 11:16:43.034545 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:16:50 crc kubenswrapper[4796]: I1205 11:16:50.538859 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd_f393972a-bae2-4076-bc21-2f9f67d12875/util/0.log" Dec 05 11:16:50 crc kubenswrapper[4796]: I1205 11:16:50.709514 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd_f393972a-bae2-4076-bc21-2f9f67d12875/util/0.log" Dec 05 11:16:50 crc kubenswrapper[4796]: I1205 11:16:50.709777 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd_f393972a-bae2-4076-bc21-2f9f67d12875/pull/0.log" Dec 05 11:16:50 crc kubenswrapper[4796]: I1205 11:16:50.709902 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd_f393972a-bae2-4076-bc21-2f9f67d12875/pull/0.log" Dec 05 11:16:50 crc kubenswrapper[4796]: I1205 11:16:50.854569 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd_f393972a-bae2-4076-bc21-2f9f67d12875/util/0.log" Dec 05 11:16:50 crc kubenswrapper[4796]: I1205 11:16:50.866390 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd_f393972a-bae2-4076-bc21-2f9f67d12875/pull/0.log" Dec 05 11:16:50 crc kubenswrapper[4796]: I1205 11:16:50.867121 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd_f393972a-bae2-4076-bc21-2f9f67d12875/extract/0.log" Dec 05 11:16:51 crc kubenswrapper[4796]: I1205 11:16:51.007191 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj_1aa954fc-98a6-42f8-b3c5-629859212fab/util/0.log" Dec 05 11:16:51 crc kubenswrapper[4796]: I1205 11:16:51.163265 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj_1aa954fc-98a6-42f8-b3c5-629859212fab/util/0.log" Dec 05 11:16:51 crc kubenswrapper[4796]: I1205 11:16:51.185784 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj_1aa954fc-98a6-42f8-b3c5-629859212fab/pull/0.log" Dec 05 11:16:51 crc kubenswrapper[4796]: I1205 11:16:51.212121 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj_1aa954fc-98a6-42f8-b3c5-629859212fab/pull/0.log" Dec 05 11:16:51 crc kubenswrapper[4796]: I1205 11:16:51.348137 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj_1aa954fc-98a6-42f8-b3c5-629859212fab/extract/0.log" Dec 05 11:16:51 crc kubenswrapper[4796]: I1205 11:16:51.358332 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj_1aa954fc-98a6-42f8-b3c5-629859212fab/util/0.log" Dec 05 11:16:51 crc kubenswrapper[4796]: I1205 11:16:51.362651 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj_1aa954fc-98a6-42f8-b3c5-629859212fab/pull/0.log" Dec 
05 11:16:51 crc kubenswrapper[4796]: I1205 11:16:51.524471 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gklxw_a6dc75b2-1e17-4aeb-a328-b430bd9e33a7/extract-utilities/0.log" Dec 05 11:16:51 crc kubenswrapper[4796]: I1205 11:16:51.671775 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gklxw_a6dc75b2-1e17-4aeb-a328-b430bd9e33a7/extract-content/0.log" Dec 05 11:16:51 crc kubenswrapper[4796]: I1205 11:16:51.672304 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gklxw_a6dc75b2-1e17-4aeb-a328-b430bd9e33a7/extract-content/0.log" Dec 05 11:16:51 crc kubenswrapper[4796]: I1205 11:16:51.676478 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gklxw_a6dc75b2-1e17-4aeb-a328-b430bd9e33a7/extract-utilities/0.log" Dec 05 11:16:51 crc kubenswrapper[4796]: I1205 11:16:51.831567 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gklxw_a6dc75b2-1e17-4aeb-a328-b430bd9e33a7/extract-content/0.log" Dec 05 11:16:51 crc kubenswrapper[4796]: I1205 11:16:51.831655 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gklxw_a6dc75b2-1e17-4aeb-a328-b430bd9e33a7/extract-utilities/0.log" Dec 05 11:16:52 crc kubenswrapper[4796]: I1205 11:16:52.062891 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p9cbq_b84a3c6f-5d02-4cc3-8644-14c368436983/extract-utilities/0.log" Dec 05 11:16:52 crc kubenswrapper[4796]: I1205 11:16:52.148253 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gklxw_a6dc75b2-1e17-4aeb-a328-b430bd9e33a7/registry-server/0.log" Dec 05 11:16:52 crc kubenswrapper[4796]: I1205 11:16:52.177437 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-p9cbq_b84a3c6f-5d02-4cc3-8644-14c368436983/extract-content/0.log" Dec 05 11:16:52 crc kubenswrapper[4796]: I1205 11:16:52.231740 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p9cbq_b84a3c6f-5d02-4cc3-8644-14c368436983/extract-utilities/0.log" Dec 05 11:16:52 crc kubenswrapper[4796]: I1205 11:16:52.256213 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p9cbq_b84a3c6f-5d02-4cc3-8644-14c368436983/extract-content/0.log" Dec 05 11:16:52 crc kubenswrapper[4796]: I1205 11:16:52.390721 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p9cbq_b84a3c6f-5d02-4cc3-8644-14c368436983/extract-utilities/0.log" Dec 05 11:16:52 crc kubenswrapper[4796]: I1205 11:16:52.425362 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p9cbq_b84a3c6f-5d02-4cc3-8644-14c368436983/extract-content/0.log" Dec 05 11:16:52 crc kubenswrapper[4796]: I1205 11:16:52.626249 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-fdddq_c6b04e09-09cc-4ca6-a5bd-61a46535f226/marketplace-operator/0.log" Dec 05 11:16:52 crc kubenswrapper[4796]: I1205 11:16:52.752063 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qvw7s_cc02dcd1-0779-4692-8d9a-78bd5cefa3ea/extract-utilities/0.log" Dec 05 11:16:52 crc kubenswrapper[4796]: I1205 11:16:52.825552 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p9cbq_b84a3c6f-5d02-4cc3-8644-14c368436983/registry-server/0.log" Dec 05 11:16:52 crc kubenswrapper[4796]: I1205 11:16:52.929779 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-qvw7s_cc02dcd1-0779-4692-8d9a-78bd5cefa3ea/extract-content/0.log" Dec 05 11:16:52 crc kubenswrapper[4796]: I1205 11:16:52.938744 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qvw7s_cc02dcd1-0779-4692-8d9a-78bd5cefa3ea/extract-content/0.log" Dec 05 11:16:52 crc kubenswrapper[4796]: I1205 11:16:52.959313 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qvw7s_cc02dcd1-0779-4692-8d9a-78bd5cefa3ea/extract-utilities/0.log" Dec 05 11:16:53 crc kubenswrapper[4796]: I1205 11:16:53.136267 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qvw7s_cc02dcd1-0779-4692-8d9a-78bd5cefa3ea/extract-content/0.log" Dec 05 11:16:53 crc kubenswrapper[4796]: I1205 11:16:53.141249 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qvw7s_cc02dcd1-0779-4692-8d9a-78bd5cefa3ea/extract-utilities/0.log" Dec 05 11:16:53 crc kubenswrapper[4796]: I1205 11:16:53.228087 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qvw7s_cc02dcd1-0779-4692-8d9a-78bd5cefa3ea/registry-server/0.log" Dec 05 11:16:53 crc kubenswrapper[4796]: I1205 11:16:53.281505 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fj572_05818125-1b72-4058-ace6-04c114506db0/extract-utilities/0.log" Dec 05 11:16:53 crc kubenswrapper[4796]: I1205 11:16:53.460103 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fj572_05818125-1b72-4058-ace6-04c114506db0/extract-content/0.log" Dec 05 11:16:53 crc kubenswrapper[4796]: I1205 11:16:53.464816 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fj572_05818125-1b72-4058-ace6-04c114506db0/extract-content/0.log" 
Dec 05 11:16:53 crc kubenswrapper[4796]: I1205 11:16:53.480857 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fj572_05818125-1b72-4058-ace6-04c114506db0/extract-utilities/0.log" Dec 05 11:16:53 crc kubenswrapper[4796]: I1205 11:16:53.622097 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fj572_05818125-1b72-4058-ace6-04c114506db0/extract-utilities/0.log" Dec 05 11:16:53 crc kubenswrapper[4796]: I1205 11:16:53.641038 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fj572_05818125-1b72-4058-ace6-04c114506db0/extract-content/0.log" Dec 05 11:16:54 crc kubenswrapper[4796]: I1205 11:16:54.007255 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fj572_05818125-1b72-4058-ace6-04c114506db0/registry-server/0.log" Dec 05 11:16:58 crc kubenswrapper[4796]: I1205 11:16:58.031514 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:16:58 crc kubenswrapper[4796]: E1205 11:16:58.032486 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:17:13 crc kubenswrapper[4796]: I1205 11:17:13.031991 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:17:13 crc kubenswrapper[4796]: E1205 11:17:13.033202 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:17:20 crc kubenswrapper[4796]: E1205 11:17:20.920434 4796 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.25.20:47520->192.168.25.20:43473: write tcp 192.168.25.20:47520->192.168.25.20:43473: write: broken pipe Dec 05 11:17:25 crc kubenswrapper[4796]: I1205 11:17:25.032458 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:17:25 crc kubenswrapper[4796]: E1205 11:17:25.033795 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:17:38 crc kubenswrapper[4796]: I1205 11:17:38.031964 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:17:38 crc kubenswrapper[4796]: E1205 11:17:38.033319 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:17:53 crc kubenswrapper[4796]: I1205 11:17:53.031936 4796 scope.go:117] "RemoveContainer" 
containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:17:53 crc kubenswrapper[4796]: E1205 11:17:53.032919 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:18:07 crc kubenswrapper[4796]: I1205 11:18:07.031053 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:18:07 crc kubenswrapper[4796]: E1205 11:18:07.031814 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:18:18 crc kubenswrapper[4796]: I1205 11:18:18.794279 4796 generic.go:334] "Generic (PLEG): container finished" podID="20d3233f-c7b5-40d1-99c3-6e9a68a235c2" containerID="d604d82c81f7616f0130c115263d782a01141ca09f154cf40b96f28d45126c3c" exitCode=0 Dec 05 11:18:18 crc kubenswrapper[4796]: I1205 11:18:18.794352 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c8nrm/must-gather-ckzkb" event={"ID":"20d3233f-c7b5-40d1-99c3-6e9a68a235c2","Type":"ContainerDied","Data":"d604d82c81f7616f0130c115263d782a01141ca09f154cf40b96f28d45126c3c"} Dec 05 11:18:18 crc kubenswrapper[4796]: I1205 11:18:18.797399 4796 scope.go:117] "RemoveContainer" containerID="d604d82c81f7616f0130c115263d782a01141ca09f154cf40b96f28d45126c3c" 
Dec 05 11:18:19 crc kubenswrapper[4796]: I1205 11:18:19.031128 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:18:19 crc kubenswrapper[4796]: E1205 11:18:19.031904 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:18:19 crc kubenswrapper[4796]: I1205 11:18:19.551353 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c8nrm_must-gather-ckzkb_20d3233f-c7b5-40d1-99c3-6e9a68a235c2/gather/0.log" Dec 05 11:18:26 crc kubenswrapper[4796]: I1205 11:18:26.980210 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c8nrm/must-gather-ckzkb"] Dec 05 11:18:26 crc kubenswrapper[4796]: I1205 11:18:26.981348 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-c8nrm/must-gather-ckzkb" podUID="20d3233f-c7b5-40d1-99c3-6e9a68a235c2" containerName="copy" containerID="cri-o://779637d66d012fc5a0bbe06c3288c9091b7265da06c291efaecea4b0edd3f53a" gracePeriod=2 Dec 05 11:18:26 crc kubenswrapper[4796]: I1205 11:18:26.991286 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c8nrm/must-gather-ckzkb"] Dec 05 11:18:27 crc kubenswrapper[4796]: I1205 11:18:27.342534 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c8nrm_must-gather-ckzkb_20d3233f-c7b5-40d1-99c3-6e9a68a235c2/copy/0.log" Dec 05 11:18:27 crc kubenswrapper[4796]: I1205 11:18:27.343334 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c8nrm/must-gather-ckzkb" Dec 05 11:18:27 crc kubenswrapper[4796]: I1205 11:18:27.411159 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrznp\" (UniqueName: \"kubernetes.io/projected/20d3233f-c7b5-40d1-99c3-6e9a68a235c2-kube-api-access-hrznp\") pod \"20d3233f-c7b5-40d1-99c3-6e9a68a235c2\" (UID: \"20d3233f-c7b5-40d1-99c3-6e9a68a235c2\") " Dec 05 11:18:27 crc kubenswrapper[4796]: I1205 11:18:27.411241 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/20d3233f-c7b5-40d1-99c3-6e9a68a235c2-must-gather-output\") pod \"20d3233f-c7b5-40d1-99c3-6e9a68a235c2\" (UID: \"20d3233f-c7b5-40d1-99c3-6e9a68a235c2\") " Dec 05 11:18:27 crc kubenswrapper[4796]: I1205 11:18:27.417857 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20d3233f-c7b5-40d1-99c3-6e9a68a235c2-kube-api-access-hrznp" (OuterVolumeSpecName: "kube-api-access-hrznp") pod "20d3233f-c7b5-40d1-99c3-6e9a68a235c2" (UID: "20d3233f-c7b5-40d1-99c3-6e9a68a235c2"). InnerVolumeSpecName "kube-api-access-hrznp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:18:27 crc kubenswrapper[4796]: I1205 11:18:27.513423 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrznp\" (UniqueName: \"kubernetes.io/projected/20d3233f-c7b5-40d1-99c3-6e9a68a235c2-kube-api-access-hrznp\") on node \"crc\" DevicePath \"\"" Dec 05 11:18:27 crc kubenswrapper[4796]: I1205 11:18:27.532620 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20d3233f-c7b5-40d1-99c3-6e9a68a235c2-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "20d3233f-c7b5-40d1-99c3-6e9a68a235c2" (UID: "20d3233f-c7b5-40d1-99c3-6e9a68a235c2"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:18:27 crc kubenswrapper[4796]: I1205 11:18:27.616192 4796 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/20d3233f-c7b5-40d1-99c3-6e9a68a235c2-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 05 11:18:27 crc kubenswrapper[4796]: I1205 11:18:27.899327 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c8nrm_must-gather-ckzkb_20d3233f-c7b5-40d1-99c3-6e9a68a235c2/copy/0.log" Dec 05 11:18:27 crc kubenswrapper[4796]: I1205 11:18:27.899623 4796 generic.go:334] "Generic (PLEG): container finished" podID="20d3233f-c7b5-40d1-99c3-6e9a68a235c2" containerID="779637d66d012fc5a0bbe06c3288c9091b7265da06c291efaecea4b0edd3f53a" exitCode=143 Dec 05 11:18:27 crc kubenswrapper[4796]: I1205 11:18:27.899674 4796 scope.go:117] "RemoveContainer" containerID="779637d66d012fc5a0bbe06c3288c9091b7265da06c291efaecea4b0edd3f53a" Dec 05 11:18:27 crc kubenswrapper[4796]: I1205 11:18:27.899704 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c8nrm/must-gather-ckzkb" Dec 05 11:18:27 crc kubenswrapper[4796]: I1205 11:18:27.916891 4796 scope.go:117] "RemoveContainer" containerID="d604d82c81f7616f0130c115263d782a01141ca09f154cf40b96f28d45126c3c" Dec 05 11:18:27 crc kubenswrapper[4796]: I1205 11:18:27.977192 4796 scope.go:117] "RemoveContainer" containerID="779637d66d012fc5a0bbe06c3288c9091b7265da06c291efaecea4b0edd3f53a" Dec 05 11:18:27 crc kubenswrapper[4796]: E1205 11:18:27.977742 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"779637d66d012fc5a0bbe06c3288c9091b7265da06c291efaecea4b0edd3f53a\": container with ID starting with 779637d66d012fc5a0bbe06c3288c9091b7265da06c291efaecea4b0edd3f53a not found: ID does not exist" containerID="779637d66d012fc5a0bbe06c3288c9091b7265da06c291efaecea4b0edd3f53a" Dec 05 11:18:27 crc kubenswrapper[4796]: I1205 11:18:27.977782 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"779637d66d012fc5a0bbe06c3288c9091b7265da06c291efaecea4b0edd3f53a"} err="failed to get container status \"779637d66d012fc5a0bbe06c3288c9091b7265da06c291efaecea4b0edd3f53a\": rpc error: code = NotFound desc = could not find container \"779637d66d012fc5a0bbe06c3288c9091b7265da06c291efaecea4b0edd3f53a\": container with ID starting with 779637d66d012fc5a0bbe06c3288c9091b7265da06c291efaecea4b0edd3f53a not found: ID does not exist" Dec 05 11:18:27 crc kubenswrapper[4796]: I1205 11:18:27.977806 4796 scope.go:117] "RemoveContainer" containerID="d604d82c81f7616f0130c115263d782a01141ca09f154cf40b96f28d45126c3c" Dec 05 11:18:27 crc kubenswrapper[4796]: E1205 11:18:27.978389 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d604d82c81f7616f0130c115263d782a01141ca09f154cf40b96f28d45126c3c\": container with ID starting with 
d604d82c81f7616f0130c115263d782a01141ca09f154cf40b96f28d45126c3c not found: ID does not exist" containerID="d604d82c81f7616f0130c115263d782a01141ca09f154cf40b96f28d45126c3c" Dec 05 11:18:27 crc kubenswrapper[4796]: I1205 11:18:27.978451 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d604d82c81f7616f0130c115263d782a01141ca09f154cf40b96f28d45126c3c"} err="failed to get container status \"d604d82c81f7616f0130c115263d782a01141ca09f154cf40b96f28d45126c3c\": rpc error: code = NotFound desc = could not find container \"d604d82c81f7616f0130c115263d782a01141ca09f154cf40b96f28d45126c3c\": container with ID starting with d604d82c81f7616f0130c115263d782a01141ca09f154cf40b96f28d45126c3c not found: ID does not exist" Dec 05 11:18:28 crc kubenswrapper[4796]: I1205 11:18:28.039927 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20d3233f-c7b5-40d1-99c3-6e9a68a235c2" path="/var/lib/kubelet/pods/20d3233f-c7b5-40d1-99c3-6e9a68a235c2/volumes" Dec 05 11:18:31 crc kubenswrapper[4796]: I1205 11:18:31.031540 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:18:31 crc kubenswrapper[4796]: E1205 11:18:31.032150 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:18:45 crc kubenswrapper[4796]: I1205 11:18:45.031057 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:18:45 crc kubenswrapper[4796]: E1205 11:18:45.032648 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:18:59 crc kubenswrapper[4796]: I1205 11:18:59.031703 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:18:59 crc kubenswrapper[4796]: E1205 11:18:59.032807 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:19:10 crc kubenswrapper[4796]: I1205 11:19:10.032100 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:19:10 crc kubenswrapper[4796]: E1205 11:19:10.033411 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:19:21 crc kubenswrapper[4796]: I1205 11:19:21.031554 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:19:21 crc kubenswrapper[4796]: E1205 11:19:21.032297 4796 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:19:34 crc kubenswrapper[4796]: I1205 11:19:34.036125 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:19:34 crc kubenswrapper[4796]: E1205 11:19:34.037128 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:19:48 crc kubenswrapper[4796]: I1205 11:19:48.032113 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:19:48 crc kubenswrapper[4796]: E1205 11:19:48.033098 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:20:02 crc kubenswrapper[4796]: I1205 11:20:02.031319 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:20:02 crc kubenswrapper[4796]: E1205 11:20:02.032150 4796 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:20:16 crc kubenswrapper[4796]: I1205 11:20:16.031374 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:20:16 crc kubenswrapper[4796]: E1205 11:20:16.032141 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:20:31 crc kubenswrapper[4796]: I1205 11:20:31.031254 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:20:31 crc kubenswrapper[4796]: E1205 11:20:31.031989 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:20:44 crc kubenswrapper[4796]: I1205 11:20:44.039247 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:20:44 crc kubenswrapper[4796]: I1205 11:20:44.291830 4796 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerStarted","Data":"33bec1cb577def4fbb5cf2e2c123c52030235cb0ae4e0e573704cf5ac833978e"} Dec 05 11:20:49 crc kubenswrapper[4796]: I1205 11:20:49.462156 4796 scope.go:117] "RemoveContainer" containerID="f641734d8f5af51af59654ac268548f2bd2d3363eb96ff25e523ac6d0f22c47e" Dec 05 11:20:50 crc kubenswrapper[4796]: I1205 11:20:50.682453 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vbfc7/must-gather-pwxxs"] Dec 05 11:20:50 crc kubenswrapper[4796]: E1205 11:20:50.683290 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d3233f-c7b5-40d1-99c3-6e9a68a235c2" containerName="copy" Dec 05 11:20:50 crc kubenswrapper[4796]: I1205 11:20:50.683307 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d3233f-c7b5-40d1-99c3-6e9a68a235c2" containerName="copy" Dec 05 11:20:50 crc kubenswrapper[4796]: E1205 11:20:50.683322 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bd5c4f6-17ec-4b96-af71-718928571298" containerName="container-00" Dec 05 11:20:50 crc kubenswrapper[4796]: I1205 11:20:50.683328 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bd5c4f6-17ec-4b96-af71-718928571298" containerName="container-00" Dec 05 11:20:50 crc kubenswrapper[4796]: E1205 11:20:50.683336 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d3233f-c7b5-40d1-99c3-6e9a68a235c2" containerName="gather" Dec 05 11:20:50 crc kubenswrapper[4796]: I1205 11:20:50.683344 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d3233f-c7b5-40d1-99c3-6e9a68a235c2" containerName="gather" Dec 05 11:20:50 crc kubenswrapper[4796]: I1205 11:20:50.683607 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d3233f-c7b5-40d1-99c3-6e9a68a235c2" containerName="copy" Dec 05 11:20:50 crc kubenswrapper[4796]: 
I1205 11:20:50.683625 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bd5c4f6-17ec-4b96-af71-718928571298" containerName="container-00" Dec 05 11:20:50 crc kubenswrapper[4796]: I1205 11:20:50.683635 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d3233f-c7b5-40d1-99c3-6e9a68a235c2" containerName="gather" Dec 05 11:20:50 crc kubenswrapper[4796]: I1205 11:20:50.685017 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vbfc7/must-gather-pwxxs" Dec 05 11:20:50 crc kubenswrapper[4796]: I1205 11:20:50.695812 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vbfc7"/"openshift-service-ca.crt" Dec 05 11:20:50 crc kubenswrapper[4796]: I1205 11:20:50.695867 4796 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vbfc7"/"kube-root-ca.crt" Dec 05 11:20:50 crc kubenswrapper[4796]: I1205 11:20:50.698646 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vbfc7/must-gather-pwxxs"] Dec 05 11:20:50 crc kubenswrapper[4796]: I1205 11:20:50.860747 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/86b2edc7-4753-4ca7-bb2d-1439505ffd8e-must-gather-output\") pod \"must-gather-pwxxs\" (UID: \"86b2edc7-4753-4ca7-bb2d-1439505ffd8e\") " pod="openshift-must-gather-vbfc7/must-gather-pwxxs" Dec 05 11:20:50 crc kubenswrapper[4796]: I1205 11:20:50.860899 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv982\" (UniqueName: \"kubernetes.io/projected/86b2edc7-4753-4ca7-bb2d-1439505ffd8e-kube-api-access-qv982\") pod \"must-gather-pwxxs\" (UID: \"86b2edc7-4753-4ca7-bb2d-1439505ffd8e\") " pod="openshift-must-gather-vbfc7/must-gather-pwxxs" Dec 05 11:20:50 crc kubenswrapper[4796]: I1205 11:20:50.962899 4796 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/86b2edc7-4753-4ca7-bb2d-1439505ffd8e-must-gather-output\") pod \"must-gather-pwxxs\" (UID: \"86b2edc7-4753-4ca7-bb2d-1439505ffd8e\") " pod="openshift-must-gather-vbfc7/must-gather-pwxxs" Dec 05 11:20:50 crc kubenswrapper[4796]: I1205 11:20:50.963040 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv982\" (UniqueName: \"kubernetes.io/projected/86b2edc7-4753-4ca7-bb2d-1439505ffd8e-kube-api-access-qv982\") pod \"must-gather-pwxxs\" (UID: \"86b2edc7-4753-4ca7-bb2d-1439505ffd8e\") " pod="openshift-must-gather-vbfc7/must-gather-pwxxs" Dec 05 11:20:50 crc kubenswrapper[4796]: I1205 11:20:50.963295 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/86b2edc7-4753-4ca7-bb2d-1439505ffd8e-must-gather-output\") pod \"must-gather-pwxxs\" (UID: \"86b2edc7-4753-4ca7-bb2d-1439505ffd8e\") " pod="openshift-must-gather-vbfc7/must-gather-pwxxs" Dec 05 11:20:50 crc kubenswrapper[4796]: I1205 11:20:50.985948 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv982\" (UniqueName: \"kubernetes.io/projected/86b2edc7-4753-4ca7-bb2d-1439505ffd8e-kube-api-access-qv982\") pod \"must-gather-pwxxs\" (UID: \"86b2edc7-4753-4ca7-bb2d-1439505ffd8e\") " pod="openshift-must-gather-vbfc7/must-gather-pwxxs" Dec 05 11:20:51 crc kubenswrapper[4796]: I1205 11:20:51.003205 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vbfc7/must-gather-pwxxs" Dec 05 11:20:51 crc kubenswrapper[4796]: I1205 11:20:51.447755 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vbfc7/must-gather-pwxxs"] Dec 05 11:20:52 crc kubenswrapper[4796]: I1205 11:20:52.373214 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vbfc7/must-gather-pwxxs" event={"ID":"86b2edc7-4753-4ca7-bb2d-1439505ffd8e","Type":"ContainerStarted","Data":"5f9dfc3538dff1c102b7637c399e249da00e980aeb071110a91d329d4a7f46d6"} Dec 05 11:20:52 crc kubenswrapper[4796]: I1205 11:20:52.373929 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vbfc7/must-gather-pwxxs" event={"ID":"86b2edc7-4753-4ca7-bb2d-1439505ffd8e","Type":"ContainerStarted","Data":"c3a181f9c510289ffe1e650731af5185874a11a05a34f2efb69c1b682eb13ee4"} Dec 05 11:20:52 crc kubenswrapper[4796]: I1205 11:20:52.373945 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vbfc7/must-gather-pwxxs" event={"ID":"86b2edc7-4753-4ca7-bb2d-1439505ffd8e","Type":"ContainerStarted","Data":"5d612472c9c6ff8bf823c03baaf3ffab9f4c163c00226f6f2cb3d1e0fa1a11b5"} Dec 05 11:20:52 crc kubenswrapper[4796]: I1205 11:20:52.403050 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vbfc7/must-gather-pwxxs" podStartSLOduration=2.40303222 podStartE2EDuration="2.40303222s" podCreationTimestamp="2025-12-05 11:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 11:20:52.397233067 +0000 UTC m=+3198.685338580" watchObservedRunningTime="2025-12-05 11:20:52.40303222 +0000 UTC m=+3198.691137734" Dec 05 11:20:54 crc kubenswrapper[4796]: E1205 11:20:54.442316 4796 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.25.20:43426->192.168.25.20:43473: write tcp 
192.168.25.20:43426->192.168.25.20:43473: write: broken pipe Dec 05 11:20:54 crc kubenswrapper[4796]: I1205 11:20:54.985745 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vbfc7/crc-debug-2l4m9"] Dec 05 11:20:54 crc kubenswrapper[4796]: I1205 11:20:54.986877 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vbfc7/crc-debug-2l4m9" Dec 05 11:20:54 crc kubenswrapper[4796]: I1205 11:20:54.988504 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vbfc7"/"default-dockercfg-l72rh" Dec 05 11:20:55 crc kubenswrapper[4796]: I1205 11:20:55.150249 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/602e4a00-93f1-44a5-ab35-538b3007ea91-host\") pod \"crc-debug-2l4m9\" (UID: \"602e4a00-93f1-44a5-ab35-538b3007ea91\") " pod="openshift-must-gather-vbfc7/crc-debug-2l4m9" Dec 05 11:20:55 crc kubenswrapper[4796]: I1205 11:20:55.150426 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw6ff\" (UniqueName: \"kubernetes.io/projected/602e4a00-93f1-44a5-ab35-538b3007ea91-kube-api-access-nw6ff\") pod \"crc-debug-2l4m9\" (UID: \"602e4a00-93f1-44a5-ab35-538b3007ea91\") " pod="openshift-must-gather-vbfc7/crc-debug-2l4m9" Dec 05 11:20:55 crc kubenswrapper[4796]: I1205 11:20:55.253461 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/602e4a00-93f1-44a5-ab35-538b3007ea91-host\") pod \"crc-debug-2l4m9\" (UID: \"602e4a00-93f1-44a5-ab35-538b3007ea91\") " pod="openshift-must-gather-vbfc7/crc-debug-2l4m9" Dec 05 11:20:55 crc kubenswrapper[4796]: I1205 11:20:55.253546 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw6ff\" (UniqueName: 
\"kubernetes.io/projected/602e4a00-93f1-44a5-ab35-538b3007ea91-kube-api-access-nw6ff\") pod \"crc-debug-2l4m9\" (UID: \"602e4a00-93f1-44a5-ab35-538b3007ea91\") " pod="openshift-must-gather-vbfc7/crc-debug-2l4m9" Dec 05 11:20:55 crc kubenswrapper[4796]: I1205 11:20:55.253642 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/602e4a00-93f1-44a5-ab35-538b3007ea91-host\") pod \"crc-debug-2l4m9\" (UID: \"602e4a00-93f1-44a5-ab35-538b3007ea91\") " pod="openshift-must-gather-vbfc7/crc-debug-2l4m9" Dec 05 11:20:55 crc kubenswrapper[4796]: I1205 11:20:55.270651 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw6ff\" (UniqueName: \"kubernetes.io/projected/602e4a00-93f1-44a5-ab35-538b3007ea91-kube-api-access-nw6ff\") pod \"crc-debug-2l4m9\" (UID: \"602e4a00-93f1-44a5-ab35-538b3007ea91\") " pod="openshift-must-gather-vbfc7/crc-debug-2l4m9" Dec 05 11:20:55 crc kubenswrapper[4796]: I1205 11:20:55.304657 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vbfc7/crc-debug-2l4m9" Dec 05 11:20:55 crc kubenswrapper[4796]: I1205 11:20:55.397230 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vbfc7/crc-debug-2l4m9" event={"ID":"602e4a00-93f1-44a5-ab35-538b3007ea91","Type":"ContainerStarted","Data":"da21c7630de9ce159e129266a1809f1c08a9e5537b296fd2e67d600c7c8d6bf1"} Dec 05 11:20:56 crc kubenswrapper[4796]: I1205 11:20:56.406515 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vbfc7/crc-debug-2l4m9" event={"ID":"602e4a00-93f1-44a5-ab35-538b3007ea91","Type":"ContainerStarted","Data":"e17e25442fe6827023f4fa292a16e3d8fe282f75b90ba183f9d0ec2002526505"} Dec 05 11:20:56 crc kubenswrapper[4796]: I1205 11:20:56.424829 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vbfc7/crc-debug-2l4m9" podStartSLOduration=2.4248138089999998 podStartE2EDuration="2.424813809s" podCreationTimestamp="2025-12-05 11:20:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-05 11:20:56.420499658 +0000 UTC m=+3202.708605171" watchObservedRunningTime="2025-12-05 11:20:56.424813809 +0000 UTC m=+3202.712919322" Dec 05 11:21:09 crc kubenswrapper[4796]: I1205 11:21:09.884269 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mztpp"] Dec 05 11:21:09 crc kubenswrapper[4796]: I1205 11:21:09.888017 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mztpp" Dec 05 11:21:09 crc kubenswrapper[4796]: I1205 11:21:09.898391 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mztpp"] Dec 05 11:21:10 crc kubenswrapper[4796]: I1205 11:21:10.060553 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94cfb696-c683-479c-b637-f2d7f99e2ea3-catalog-content\") pod \"certified-operators-mztpp\" (UID: \"94cfb696-c683-479c-b637-f2d7f99e2ea3\") " pod="openshift-marketplace/certified-operators-mztpp" Dec 05 11:21:10 crc kubenswrapper[4796]: I1205 11:21:10.061264 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94cfb696-c683-479c-b637-f2d7f99e2ea3-utilities\") pod \"certified-operators-mztpp\" (UID: \"94cfb696-c683-479c-b637-f2d7f99e2ea3\") " pod="openshift-marketplace/certified-operators-mztpp" Dec 05 11:21:10 crc kubenswrapper[4796]: I1205 11:21:10.061305 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txzxd\" (UniqueName: \"kubernetes.io/projected/94cfb696-c683-479c-b637-f2d7f99e2ea3-kube-api-access-txzxd\") pod \"certified-operators-mztpp\" (UID: \"94cfb696-c683-479c-b637-f2d7f99e2ea3\") " pod="openshift-marketplace/certified-operators-mztpp" Dec 05 11:21:10 crc kubenswrapper[4796]: I1205 11:21:10.164057 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94cfb696-c683-479c-b637-f2d7f99e2ea3-utilities\") pod \"certified-operators-mztpp\" (UID: \"94cfb696-c683-479c-b637-f2d7f99e2ea3\") " pod="openshift-marketplace/certified-operators-mztpp" Dec 05 11:21:10 crc kubenswrapper[4796]: I1205 11:21:10.164126 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-txzxd\" (UniqueName: \"kubernetes.io/projected/94cfb696-c683-479c-b637-f2d7f99e2ea3-kube-api-access-txzxd\") pod \"certified-operators-mztpp\" (UID: \"94cfb696-c683-479c-b637-f2d7f99e2ea3\") " pod="openshift-marketplace/certified-operators-mztpp" Dec 05 11:21:10 crc kubenswrapper[4796]: I1205 11:21:10.164182 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94cfb696-c683-479c-b637-f2d7f99e2ea3-catalog-content\") pod \"certified-operators-mztpp\" (UID: \"94cfb696-c683-479c-b637-f2d7f99e2ea3\") " pod="openshift-marketplace/certified-operators-mztpp" Dec 05 11:21:10 crc kubenswrapper[4796]: I1205 11:21:10.165163 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94cfb696-c683-479c-b637-f2d7f99e2ea3-utilities\") pod \"certified-operators-mztpp\" (UID: \"94cfb696-c683-479c-b637-f2d7f99e2ea3\") " pod="openshift-marketplace/certified-operators-mztpp" Dec 05 11:21:10 crc kubenswrapper[4796]: I1205 11:21:10.165968 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94cfb696-c683-479c-b637-f2d7f99e2ea3-catalog-content\") pod \"certified-operators-mztpp\" (UID: \"94cfb696-c683-479c-b637-f2d7f99e2ea3\") " pod="openshift-marketplace/certified-operators-mztpp" Dec 05 11:21:10 crc kubenswrapper[4796]: I1205 11:21:10.184081 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txzxd\" (UniqueName: \"kubernetes.io/projected/94cfb696-c683-479c-b637-f2d7f99e2ea3-kube-api-access-txzxd\") pod \"certified-operators-mztpp\" (UID: \"94cfb696-c683-479c-b637-f2d7f99e2ea3\") " pod="openshift-marketplace/certified-operators-mztpp" Dec 05 11:21:10 crc kubenswrapper[4796]: I1205 11:21:10.204994 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mztpp" Dec 05 11:21:10 crc kubenswrapper[4796]: I1205 11:21:10.672196 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mztpp"] Dec 05 11:21:10 crc kubenswrapper[4796]: W1205 11:21:10.680033 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94cfb696_c683_479c_b637_f2d7f99e2ea3.slice/crio-1e475786bfc074ce5ef818fabc350753214be9ea8efab5cd7979c4ec0d8f1e12 WatchSource:0}: Error finding container 1e475786bfc074ce5ef818fabc350753214be9ea8efab5cd7979c4ec0d8f1e12: Status 404 returned error can't find the container with id 1e475786bfc074ce5ef818fabc350753214be9ea8efab5cd7979c4ec0d8f1e12 Dec 05 11:21:11 crc kubenswrapper[4796]: I1205 11:21:11.524774 4796 generic.go:334] "Generic (PLEG): container finished" podID="94cfb696-c683-479c-b637-f2d7f99e2ea3" containerID="276d9cc5b0d5cce59cd7c671ab6b5c58842de90e873b4d035f566be9403ca960" exitCode=0 Dec 05 11:21:11 crc kubenswrapper[4796]: I1205 11:21:11.524971 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mztpp" event={"ID":"94cfb696-c683-479c-b637-f2d7f99e2ea3","Type":"ContainerDied","Data":"276d9cc5b0d5cce59cd7c671ab6b5c58842de90e873b4d035f566be9403ca960"} Dec 05 11:21:11 crc kubenswrapper[4796]: I1205 11:21:11.525201 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mztpp" event={"ID":"94cfb696-c683-479c-b637-f2d7f99e2ea3","Type":"ContainerStarted","Data":"1e475786bfc074ce5ef818fabc350753214be9ea8efab5cd7979c4ec0d8f1e12"} Dec 05 11:21:11 crc kubenswrapper[4796]: I1205 11:21:11.526817 4796 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 05 11:21:12 crc kubenswrapper[4796]: I1205 11:21:12.546492 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-mztpp" event={"ID":"94cfb696-c683-479c-b637-f2d7f99e2ea3","Type":"ContainerStarted","Data":"af11e2971329756429971e7ffdc4b1a51d37a62d6e3697542b48f750fee9b6cf"} Dec 05 11:21:13 crc kubenswrapper[4796]: I1205 11:21:13.559035 4796 generic.go:334] "Generic (PLEG): container finished" podID="94cfb696-c683-479c-b637-f2d7f99e2ea3" containerID="af11e2971329756429971e7ffdc4b1a51d37a62d6e3697542b48f750fee9b6cf" exitCode=0 Dec 05 11:21:13 crc kubenswrapper[4796]: I1205 11:21:13.559123 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mztpp" event={"ID":"94cfb696-c683-479c-b637-f2d7f99e2ea3","Type":"ContainerDied","Data":"af11e2971329756429971e7ffdc4b1a51d37a62d6e3697542b48f750fee9b6cf"} Dec 05 11:21:14 crc kubenswrapper[4796]: I1205 11:21:14.577818 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mztpp" event={"ID":"94cfb696-c683-479c-b637-f2d7f99e2ea3","Type":"ContainerStarted","Data":"51effdca7bfafdf5655ff623cc6c1003e60a880f42d6f21231dfbfbc5c1027a9"} Dec 05 11:21:14 crc kubenswrapper[4796]: I1205 11:21:14.604814 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mztpp" podStartSLOduration=3.07494909 podStartE2EDuration="5.604794933s" podCreationTimestamp="2025-12-05 11:21:09 +0000 UTC" firstStartedPulling="2025-12-05 11:21:11.526562153 +0000 UTC m=+3217.814667665" lastFinishedPulling="2025-12-05 11:21:14.056407994 +0000 UTC m=+3220.344513508" observedRunningTime="2025-12-05 11:21:14.599914507 +0000 UTC m=+3220.888020020" watchObservedRunningTime="2025-12-05 11:21:14.604794933 +0000 UTC m=+3220.892900446" Dec 05 11:21:20 crc kubenswrapper[4796]: I1205 11:21:20.205755 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mztpp" Dec 05 11:21:20 crc kubenswrapper[4796]: I1205 11:21:20.206275 4796 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mztpp" Dec 05 11:21:20 crc kubenswrapper[4796]: I1205 11:21:20.242838 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mztpp" Dec 05 11:21:20 crc kubenswrapper[4796]: I1205 11:21:20.668046 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mztpp" Dec 05 11:21:20 crc kubenswrapper[4796]: I1205 11:21:20.711492 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mztpp"] Dec 05 11:21:21 crc kubenswrapper[4796]: I1205 11:21:21.641303 4796 generic.go:334] "Generic (PLEG): container finished" podID="602e4a00-93f1-44a5-ab35-538b3007ea91" containerID="e17e25442fe6827023f4fa292a16e3d8fe282f75b90ba183f9d0ec2002526505" exitCode=0 Dec 05 11:21:21 crc kubenswrapper[4796]: I1205 11:21:21.641398 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vbfc7/crc-debug-2l4m9" event={"ID":"602e4a00-93f1-44a5-ab35-538b3007ea91","Type":"ContainerDied","Data":"e17e25442fe6827023f4fa292a16e3d8fe282f75b90ba183f9d0ec2002526505"} Dec 05 11:21:22 crc kubenswrapper[4796]: I1205 11:21:22.669884 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mztpp" podUID="94cfb696-c683-479c-b637-f2d7f99e2ea3" containerName="registry-server" containerID="cri-o://51effdca7bfafdf5655ff623cc6c1003e60a880f42d6f21231dfbfbc5c1027a9" gracePeriod=2 Dec 05 11:21:22 crc kubenswrapper[4796]: I1205 11:21:22.750294 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vbfc7/crc-debug-2l4m9" Dec 05 11:21:22 crc kubenswrapper[4796]: I1205 11:21:22.787179 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vbfc7/crc-debug-2l4m9"] Dec 05 11:21:22 crc kubenswrapper[4796]: I1205 11:21:22.795597 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vbfc7/crc-debug-2l4m9"] Dec 05 11:21:22 crc kubenswrapper[4796]: I1205 11:21:22.855467 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw6ff\" (UniqueName: \"kubernetes.io/projected/602e4a00-93f1-44a5-ab35-538b3007ea91-kube-api-access-nw6ff\") pod \"602e4a00-93f1-44a5-ab35-538b3007ea91\" (UID: \"602e4a00-93f1-44a5-ab35-538b3007ea91\") " Dec 05 11:21:22 crc kubenswrapper[4796]: I1205 11:21:22.855598 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/602e4a00-93f1-44a5-ab35-538b3007ea91-host\") pod \"602e4a00-93f1-44a5-ab35-538b3007ea91\" (UID: \"602e4a00-93f1-44a5-ab35-538b3007ea91\") " Dec 05 11:21:22 crc kubenswrapper[4796]: I1205 11:21:22.855709 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/602e4a00-93f1-44a5-ab35-538b3007ea91-host" (OuterVolumeSpecName: "host") pod "602e4a00-93f1-44a5-ab35-538b3007ea91" (UID: "602e4a00-93f1-44a5-ab35-538b3007ea91"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 11:21:22 crc kubenswrapper[4796]: I1205 11:21:22.856435 4796 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/602e4a00-93f1-44a5-ab35-538b3007ea91-host\") on node \"crc\" DevicePath \"\"" Dec 05 11:21:22 crc kubenswrapper[4796]: I1205 11:21:22.862101 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/602e4a00-93f1-44a5-ab35-538b3007ea91-kube-api-access-nw6ff" (OuterVolumeSpecName: "kube-api-access-nw6ff") pod "602e4a00-93f1-44a5-ab35-538b3007ea91" (UID: "602e4a00-93f1-44a5-ab35-538b3007ea91"). InnerVolumeSpecName "kube-api-access-nw6ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:21:22 crc kubenswrapper[4796]: I1205 11:21:22.959131 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw6ff\" (UniqueName: \"kubernetes.io/projected/602e4a00-93f1-44a5-ab35-538b3007ea91-kube-api-access-nw6ff\") on node \"crc\" DevicePath \"\"" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.589593 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mztpp" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.677294 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94cfb696-c683-479c-b637-f2d7f99e2ea3-utilities\") pod \"94cfb696-c683-479c-b637-f2d7f99e2ea3\" (UID: \"94cfb696-c683-479c-b637-f2d7f99e2ea3\") " Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.677796 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txzxd\" (UniqueName: \"kubernetes.io/projected/94cfb696-c683-479c-b637-f2d7f99e2ea3-kube-api-access-txzxd\") pod \"94cfb696-c683-479c-b637-f2d7f99e2ea3\" (UID: \"94cfb696-c683-479c-b637-f2d7f99e2ea3\") " Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.677868 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94cfb696-c683-479c-b637-f2d7f99e2ea3-catalog-content\") pod \"94cfb696-c683-479c-b637-f2d7f99e2ea3\" (UID: \"94cfb696-c683-479c-b637-f2d7f99e2ea3\") " Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.678306 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94cfb696-c683-479c-b637-f2d7f99e2ea3-utilities" (OuterVolumeSpecName: "utilities") pod "94cfb696-c683-479c-b637-f2d7f99e2ea3" (UID: "94cfb696-c683-479c-b637-f2d7f99e2ea3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.678476 4796 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da21c7630de9ce159e129266a1809f1c08a9e5537b296fd2e67d600c7c8d6bf1" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.678544 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vbfc7/crc-debug-2l4m9" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.682429 4796 generic.go:334] "Generic (PLEG): container finished" podID="94cfb696-c683-479c-b637-f2d7f99e2ea3" containerID="51effdca7bfafdf5655ff623cc6c1003e60a880f42d6f21231dfbfbc5c1027a9" exitCode=0 Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.682471 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mztpp" event={"ID":"94cfb696-c683-479c-b637-f2d7f99e2ea3","Type":"ContainerDied","Data":"51effdca7bfafdf5655ff623cc6c1003e60a880f42d6f21231dfbfbc5c1027a9"} Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.682501 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mztpp" event={"ID":"94cfb696-c683-479c-b637-f2d7f99e2ea3","Type":"ContainerDied","Data":"1e475786bfc074ce5ef818fabc350753214be9ea8efab5cd7979c4ec0d8f1e12"} Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.682519 4796 scope.go:117] "RemoveContainer" containerID="51effdca7bfafdf5655ff623cc6c1003e60a880f42d6f21231dfbfbc5c1027a9" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.682609 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94cfb696-c683-479c-b637-f2d7f99e2ea3-kube-api-access-txzxd" (OuterVolumeSpecName: "kube-api-access-txzxd") pod "94cfb696-c683-479c-b637-f2d7f99e2ea3" (UID: "94cfb696-c683-479c-b637-f2d7f99e2ea3"). InnerVolumeSpecName "kube-api-access-txzxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.682640 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mztpp" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.720666 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94cfb696-c683-479c-b637-f2d7f99e2ea3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94cfb696-c683-479c-b637-f2d7f99e2ea3" (UID: "94cfb696-c683-479c-b637-f2d7f99e2ea3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.744806 4796 scope.go:117] "RemoveContainer" containerID="af11e2971329756429971e7ffdc4b1a51d37a62d6e3697542b48f750fee9b6cf" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.778715 4796 scope.go:117] "RemoveContainer" containerID="276d9cc5b0d5cce59cd7c671ab6b5c58842de90e873b4d035f566be9403ca960" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.780528 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94cfb696-c683-479c-b637-f2d7f99e2ea3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.780551 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94cfb696-c683-479c-b637-f2d7f99e2ea3-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.780563 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txzxd\" (UniqueName: \"kubernetes.io/projected/94cfb696-c683-479c-b637-f2d7f99e2ea3-kube-api-access-txzxd\") on node \"crc\" DevicePath \"\"" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.804094 4796 scope.go:117] "RemoveContainer" containerID="51effdca7bfafdf5655ff623cc6c1003e60a880f42d6f21231dfbfbc5c1027a9" Dec 05 11:21:23 crc kubenswrapper[4796]: E1205 11:21:23.804441 4796 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"51effdca7bfafdf5655ff623cc6c1003e60a880f42d6f21231dfbfbc5c1027a9\": container with ID starting with 51effdca7bfafdf5655ff623cc6c1003e60a880f42d6f21231dfbfbc5c1027a9 not found: ID does not exist" containerID="51effdca7bfafdf5655ff623cc6c1003e60a880f42d6f21231dfbfbc5c1027a9" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.804471 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51effdca7bfafdf5655ff623cc6c1003e60a880f42d6f21231dfbfbc5c1027a9"} err="failed to get container status \"51effdca7bfafdf5655ff623cc6c1003e60a880f42d6f21231dfbfbc5c1027a9\": rpc error: code = NotFound desc = could not find container \"51effdca7bfafdf5655ff623cc6c1003e60a880f42d6f21231dfbfbc5c1027a9\": container with ID starting with 51effdca7bfafdf5655ff623cc6c1003e60a880f42d6f21231dfbfbc5c1027a9 not found: ID does not exist" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.804494 4796 scope.go:117] "RemoveContainer" containerID="af11e2971329756429971e7ffdc4b1a51d37a62d6e3697542b48f750fee9b6cf" Dec 05 11:21:23 crc kubenswrapper[4796]: E1205 11:21:23.804806 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af11e2971329756429971e7ffdc4b1a51d37a62d6e3697542b48f750fee9b6cf\": container with ID starting with af11e2971329756429971e7ffdc4b1a51d37a62d6e3697542b48f750fee9b6cf not found: ID does not exist" containerID="af11e2971329756429971e7ffdc4b1a51d37a62d6e3697542b48f750fee9b6cf" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.804852 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af11e2971329756429971e7ffdc4b1a51d37a62d6e3697542b48f750fee9b6cf"} err="failed to get container status \"af11e2971329756429971e7ffdc4b1a51d37a62d6e3697542b48f750fee9b6cf\": rpc error: code = NotFound desc = could not find container 
\"af11e2971329756429971e7ffdc4b1a51d37a62d6e3697542b48f750fee9b6cf\": container with ID starting with af11e2971329756429971e7ffdc4b1a51d37a62d6e3697542b48f750fee9b6cf not found: ID does not exist" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.804881 4796 scope.go:117] "RemoveContainer" containerID="276d9cc5b0d5cce59cd7c671ab6b5c58842de90e873b4d035f566be9403ca960" Dec 05 11:21:23 crc kubenswrapper[4796]: E1205 11:21:23.805161 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"276d9cc5b0d5cce59cd7c671ab6b5c58842de90e873b4d035f566be9403ca960\": container with ID starting with 276d9cc5b0d5cce59cd7c671ab6b5c58842de90e873b4d035f566be9403ca960 not found: ID does not exist" containerID="276d9cc5b0d5cce59cd7c671ab6b5c58842de90e873b4d035f566be9403ca960" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.805216 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"276d9cc5b0d5cce59cd7c671ab6b5c58842de90e873b4d035f566be9403ca960"} err="failed to get container status \"276d9cc5b0d5cce59cd7c671ab6b5c58842de90e873b4d035f566be9403ca960\": rpc error: code = NotFound desc = could not find container \"276d9cc5b0d5cce59cd7c671ab6b5c58842de90e873b4d035f566be9403ca960\": container with ID starting with 276d9cc5b0d5cce59cd7c671ab6b5c58842de90e873b4d035f566be9403ca960 not found: ID does not exist" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.953708 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vbfc7/crc-debug-crvxc"] Dec 05 11:21:23 crc kubenswrapper[4796]: E1205 11:21:23.954072 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94cfb696-c683-479c-b637-f2d7f99e2ea3" containerName="extract-utilities" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.954091 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="94cfb696-c683-479c-b637-f2d7f99e2ea3" 
containerName="extract-utilities" Dec 05 11:21:23 crc kubenswrapper[4796]: E1205 11:21:23.954104 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602e4a00-93f1-44a5-ab35-538b3007ea91" containerName="container-00" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.954111 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="602e4a00-93f1-44a5-ab35-538b3007ea91" containerName="container-00" Dec 05 11:21:23 crc kubenswrapper[4796]: E1205 11:21:23.954130 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94cfb696-c683-479c-b637-f2d7f99e2ea3" containerName="registry-server" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.954137 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="94cfb696-c683-479c-b637-f2d7f99e2ea3" containerName="registry-server" Dec 05 11:21:23 crc kubenswrapper[4796]: E1205 11:21:23.954146 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94cfb696-c683-479c-b637-f2d7f99e2ea3" containerName="extract-content" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.954151 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="94cfb696-c683-479c-b637-f2d7f99e2ea3" containerName="extract-content" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.954325 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="602e4a00-93f1-44a5-ab35-538b3007ea91" containerName="container-00" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.954348 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="94cfb696-c683-479c-b637-f2d7f99e2ea3" containerName="registry-server" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.954974 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vbfc7/crc-debug-crvxc" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.957883 4796 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vbfc7"/"default-dockercfg-l72rh" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.985118 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/238031be-2985-400f-91e3-1fd3c9c2499f-host\") pod \"crc-debug-crvxc\" (UID: \"238031be-2985-400f-91e3-1fd3c9c2499f\") " pod="openshift-must-gather-vbfc7/crc-debug-crvxc" Dec 05 11:21:23 crc kubenswrapper[4796]: I1205 11:21:23.985318 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrwdz\" (UniqueName: \"kubernetes.io/projected/238031be-2985-400f-91e3-1fd3c9c2499f-kube-api-access-xrwdz\") pod \"crc-debug-crvxc\" (UID: \"238031be-2985-400f-91e3-1fd3c9c2499f\") " pod="openshift-must-gather-vbfc7/crc-debug-crvxc" Dec 05 11:21:24 crc kubenswrapper[4796]: I1205 11:21:24.014884 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mztpp"] Dec 05 11:21:24 crc kubenswrapper[4796]: I1205 11:21:24.020911 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mztpp"] Dec 05 11:21:24 crc kubenswrapper[4796]: I1205 11:21:24.040461 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602e4a00-93f1-44a5-ab35-538b3007ea91" path="/var/lib/kubelet/pods/602e4a00-93f1-44a5-ab35-538b3007ea91/volumes" Dec 05 11:21:24 crc kubenswrapper[4796]: I1205 11:21:24.041016 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94cfb696-c683-479c-b637-f2d7f99e2ea3" path="/var/lib/kubelet/pods/94cfb696-c683-479c-b637-f2d7f99e2ea3/volumes" Dec 05 11:21:24 crc kubenswrapper[4796]: I1205 11:21:24.087855 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xrwdz\" (UniqueName: \"kubernetes.io/projected/238031be-2985-400f-91e3-1fd3c9c2499f-kube-api-access-xrwdz\") pod \"crc-debug-crvxc\" (UID: \"238031be-2985-400f-91e3-1fd3c9c2499f\") " pod="openshift-must-gather-vbfc7/crc-debug-crvxc" Dec 05 11:21:24 crc kubenswrapper[4796]: I1205 11:21:24.088116 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/238031be-2985-400f-91e3-1fd3c9c2499f-host\") pod \"crc-debug-crvxc\" (UID: \"238031be-2985-400f-91e3-1fd3c9c2499f\") " pod="openshift-must-gather-vbfc7/crc-debug-crvxc" Dec 05 11:21:24 crc kubenswrapper[4796]: I1205 11:21:24.088249 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/238031be-2985-400f-91e3-1fd3c9c2499f-host\") pod \"crc-debug-crvxc\" (UID: \"238031be-2985-400f-91e3-1fd3c9c2499f\") " pod="openshift-must-gather-vbfc7/crc-debug-crvxc" Dec 05 11:21:24 crc kubenswrapper[4796]: I1205 11:21:24.116431 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrwdz\" (UniqueName: \"kubernetes.io/projected/238031be-2985-400f-91e3-1fd3c9c2499f-kube-api-access-xrwdz\") pod \"crc-debug-crvxc\" (UID: \"238031be-2985-400f-91e3-1fd3c9c2499f\") " pod="openshift-must-gather-vbfc7/crc-debug-crvxc" Dec 05 11:21:24 crc kubenswrapper[4796]: I1205 11:21:24.270660 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vbfc7/crc-debug-crvxc" Dec 05 11:21:24 crc kubenswrapper[4796]: I1205 11:21:24.708908 4796 generic.go:334] "Generic (PLEG): container finished" podID="238031be-2985-400f-91e3-1fd3c9c2499f" containerID="64044ddc5270dd56239f650ca3d81074f4511b6d8657c64054554c5ac7bf1656" exitCode=0 Dec 05 11:21:24 crc kubenswrapper[4796]: I1205 11:21:24.709016 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vbfc7/crc-debug-crvxc" event={"ID":"238031be-2985-400f-91e3-1fd3c9c2499f","Type":"ContainerDied","Data":"64044ddc5270dd56239f650ca3d81074f4511b6d8657c64054554c5ac7bf1656"} Dec 05 11:21:24 crc kubenswrapper[4796]: I1205 11:21:24.709110 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vbfc7/crc-debug-crvxc" event={"ID":"238031be-2985-400f-91e3-1fd3c9c2499f","Type":"ContainerStarted","Data":"e8e76170b409cbf860bfd8554bbfdf7fb068364e04078f5034e63d0ebfd29342"} Dec 05 11:21:25 crc kubenswrapper[4796]: I1205 11:21:25.179640 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vbfc7/crc-debug-crvxc"] Dec 05 11:21:25 crc kubenswrapper[4796]: I1205 11:21:25.188100 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vbfc7/crc-debug-crvxc"] Dec 05 11:21:25 crc kubenswrapper[4796]: I1205 11:21:25.805008 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vbfc7/crc-debug-crvxc" Dec 05 11:21:25 crc kubenswrapper[4796]: I1205 11:21:25.926509 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/238031be-2985-400f-91e3-1fd3c9c2499f-host\") pod \"238031be-2985-400f-91e3-1fd3c9c2499f\" (UID: \"238031be-2985-400f-91e3-1fd3c9c2499f\") " Dec 05 11:21:25 crc kubenswrapper[4796]: I1205 11:21:25.926646 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrwdz\" (UniqueName: \"kubernetes.io/projected/238031be-2985-400f-91e3-1fd3c9c2499f-kube-api-access-xrwdz\") pod \"238031be-2985-400f-91e3-1fd3c9c2499f\" (UID: \"238031be-2985-400f-91e3-1fd3c9c2499f\") " Dec 05 11:21:25 crc kubenswrapper[4796]: I1205 11:21:25.926652 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/238031be-2985-400f-91e3-1fd3c9c2499f-host" (OuterVolumeSpecName: "host") pod "238031be-2985-400f-91e3-1fd3c9c2499f" (UID: "238031be-2985-400f-91e3-1fd3c9c2499f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 11:21:25 crc kubenswrapper[4796]: I1205 11:21:25.927417 4796 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/238031be-2985-400f-91e3-1fd3c9c2499f-host\") on node \"crc\" DevicePath \"\"" Dec 05 11:21:25 crc kubenswrapper[4796]: I1205 11:21:25.934049 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/238031be-2985-400f-91e3-1fd3c9c2499f-kube-api-access-xrwdz" (OuterVolumeSpecName: "kube-api-access-xrwdz") pod "238031be-2985-400f-91e3-1fd3c9c2499f" (UID: "238031be-2985-400f-91e3-1fd3c9c2499f"). InnerVolumeSpecName "kube-api-access-xrwdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:21:26 crc kubenswrapper[4796]: I1205 11:21:26.029904 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrwdz\" (UniqueName: \"kubernetes.io/projected/238031be-2985-400f-91e3-1fd3c9c2499f-kube-api-access-xrwdz\") on node \"crc\" DevicePath \"\"" Dec 05 11:21:26 crc kubenswrapper[4796]: I1205 11:21:26.060248 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="238031be-2985-400f-91e3-1fd3c9c2499f" path="/var/lib/kubelet/pods/238031be-2985-400f-91e3-1fd3c9c2499f/volumes" Dec 05 11:21:26 crc kubenswrapper[4796]: I1205 11:21:26.415452 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vbfc7/crc-debug-rnqq6"] Dec 05 11:21:26 crc kubenswrapper[4796]: E1205 11:21:26.415922 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="238031be-2985-400f-91e3-1fd3c9c2499f" containerName="container-00" Dec 05 11:21:26 crc kubenswrapper[4796]: I1205 11:21:26.415944 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="238031be-2985-400f-91e3-1fd3c9c2499f" containerName="container-00" Dec 05 11:21:26 crc kubenswrapper[4796]: I1205 11:21:26.416153 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="238031be-2985-400f-91e3-1fd3c9c2499f" containerName="container-00" Dec 05 11:21:26 crc kubenswrapper[4796]: I1205 11:21:26.416798 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vbfc7/crc-debug-rnqq6" Dec 05 11:21:26 crc kubenswrapper[4796]: I1205 11:21:26.439932 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/572f1c25-57a4-4aa6-9069-1765cc7d63cf-host\") pod \"crc-debug-rnqq6\" (UID: \"572f1c25-57a4-4aa6-9069-1765cc7d63cf\") " pod="openshift-must-gather-vbfc7/crc-debug-rnqq6" Dec 05 11:21:26 crc kubenswrapper[4796]: I1205 11:21:26.440246 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcw8b\" (UniqueName: \"kubernetes.io/projected/572f1c25-57a4-4aa6-9069-1765cc7d63cf-kube-api-access-gcw8b\") pod \"crc-debug-rnqq6\" (UID: \"572f1c25-57a4-4aa6-9069-1765cc7d63cf\") " pod="openshift-must-gather-vbfc7/crc-debug-rnqq6" Dec 05 11:21:26 crc kubenswrapper[4796]: I1205 11:21:26.541394 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/572f1c25-57a4-4aa6-9069-1765cc7d63cf-host\") pod \"crc-debug-rnqq6\" (UID: \"572f1c25-57a4-4aa6-9069-1765cc7d63cf\") " pod="openshift-must-gather-vbfc7/crc-debug-rnqq6" Dec 05 11:21:26 crc kubenswrapper[4796]: I1205 11:21:26.541462 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcw8b\" (UniqueName: \"kubernetes.io/projected/572f1c25-57a4-4aa6-9069-1765cc7d63cf-kube-api-access-gcw8b\") pod \"crc-debug-rnqq6\" (UID: \"572f1c25-57a4-4aa6-9069-1765cc7d63cf\") " pod="openshift-must-gather-vbfc7/crc-debug-rnqq6" Dec 05 11:21:26 crc kubenswrapper[4796]: I1205 11:21:26.541583 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/572f1c25-57a4-4aa6-9069-1765cc7d63cf-host\") pod \"crc-debug-rnqq6\" (UID: \"572f1c25-57a4-4aa6-9069-1765cc7d63cf\") " pod="openshift-must-gather-vbfc7/crc-debug-rnqq6" Dec 05 11:21:26 crc 
kubenswrapper[4796]: I1205 11:21:26.561414 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcw8b\" (UniqueName: \"kubernetes.io/projected/572f1c25-57a4-4aa6-9069-1765cc7d63cf-kube-api-access-gcw8b\") pod \"crc-debug-rnqq6\" (UID: \"572f1c25-57a4-4aa6-9069-1765cc7d63cf\") " pod="openshift-must-gather-vbfc7/crc-debug-rnqq6" Dec 05 11:21:26 crc kubenswrapper[4796]: I1205 11:21:26.731263 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vbfc7/crc-debug-rnqq6" Dec 05 11:21:26 crc kubenswrapper[4796]: I1205 11:21:26.733547 4796 scope.go:117] "RemoveContainer" containerID="64044ddc5270dd56239f650ca3d81074f4511b6d8657c64054554c5ac7bf1656" Dec 05 11:21:26 crc kubenswrapper[4796]: I1205 11:21:26.733583 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vbfc7/crc-debug-crvxc" Dec 05 11:21:26 crc kubenswrapper[4796]: W1205 11:21:26.778830 4796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod572f1c25_57a4_4aa6_9069_1765cc7d63cf.slice/crio-3c6e0fe98de3f3108408ffbba2f26bd10b59d13a5b8e2fa2bc29dfb3c7c9f5c6 WatchSource:0}: Error finding container 3c6e0fe98de3f3108408ffbba2f26bd10b59d13a5b8e2fa2bc29dfb3c7c9f5c6: Status 404 returned error can't find the container with id 3c6e0fe98de3f3108408ffbba2f26bd10b59d13a5b8e2fa2bc29dfb3c7c9f5c6 Dec 05 11:21:27 crc kubenswrapper[4796]: I1205 11:21:27.746484 4796 generic.go:334] "Generic (PLEG): container finished" podID="572f1c25-57a4-4aa6-9069-1765cc7d63cf" containerID="29f59d4d9a5f8c54f4782e0b90de78e3383bd834f9a8d513222376f06876a7c2" exitCode=0 Dec 05 11:21:27 crc kubenswrapper[4796]: I1205 11:21:27.746601 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vbfc7/crc-debug-rnqq6" 
event={"ID":"572f1c25-57a4-4aa6-9069-1765cc7d63cf","Type":"ContainerDied","Data":"29f59d4d9a5f8c54f4782e0b90de78e3383bd834f9a8d513222376f06876a7c2"} Dec 05 11:21:27 crc kubenswrapper[4796]: I1205 11:21:27.746949 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vbfc7/crc-debug-rnqq6" event={"ID":"572f1c25-57a4-4aa6-9069-1765cc7d63cf","Type":"ContainerStarted","Data":"3c6e0fe98de3f3108408ffbba2f26bd10b59d13a5b8e2fa2bc29dfb3c7c9f5c6"} Dec 05 11:21:27 crc kubenswrapper[4796]: I1205 11:21:27.783952 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vbfc7/crc-debug-rnqq6"] Dec 05 11:21:27 crc kubenswrapper[4796]: I1205 11:21:27.793337 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vbfc7/crc-debug-rnqq6"] Dec 05 11:21:28 crc kubenswrapper[4796]: I1205 11:21:28.836843 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vbfc7/crc-debug-rnqq6" Dec 05 11:21:28 crc kubenswrapper[4796]: I1205 11:21:28.887131 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/572f1c25-57a4-4aa6-9069-1765cc7d63cf-host\") pod \"572f1c25-57a4-4aa6-9069-1765cc7d63cf\" (UID: \"572f1c25-57a4-4aa6-9069-1765cc7d63cf\") " Dec 05 11:21:28 crc kubenswrapper[4796]: I1205 11:21:28.887252 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/572f1c25-57a4-4aa6-9069-1765cc7d63cf-host" (OuterVolumeSpecName: "host") pod "572f1c25-57a4-4aa6-9069-1765cc7d63cf" (UID: "572f1c25-57a4-4aa6-9069-1765cc7d63cf"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 05 11:21:28 crc kubenswrapper[4796]: I1205 11:21:28.887302 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcw8b\" (UniqueName: \"kubernetes.io/projected/572f1c25-57a4-4aa6-9069-1765cc7d63cf-kube-api-access-gcw8b\") pod \"572f1c25-57a4-4aa6-9069-1765cc7d63cf\" (UID: \"572f1c25-57a4-4aa6-9069-1765cc7d63cf\") " Dec 05 11:21:28 crc kubenswrapper[4796]: I1205 11:21:28.887799 4796 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/572f1c25-57a4-4aa6-9069-1765cc7d63cf-host\") on node \"crc\" DevicePath \"\"" Dec 05 11:21:28 crc kubenswrapper[4796]: I1205 11:21:28.892544 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/572f1c25-57a4-4aa6-9069-1765cc7d63cf-kube-api-access-gcw8b" (OuterVolumeSpecName: "kube-api-access-gcw8b") pod "572f1c25-57a4-4aa6-9069-1765cc7d63cf" (UID: "572f1c25-57a4-4aa6-9069-1765cc7d63cf"). InnerVolumeSpecName "kube-api-access-gcw8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:21:28 crc kubenswrapper[4796]: I1205 11:21:28.989955 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcw8b\" (UniqueName: \"kubernetes.io/projected/572f1c25-57a4-4aa6-9069-1765cc7d63cf-kube-api-access-gcw8b\") on node \"crc\" DevicePath \"\"" Dec 05 11:21:29 crc kubenswrapper[4796]: I1205 11:21:29.769611 4796 scope.go:117] "RemoveContainer" containerID="29f59d4d9a5f8c54f4782e0b90de78e3383bd834f9a8d513222376f06876a7c2" Dec 05 11:21:29 crc kubenswrapper[4796]: I1205 11:21:29.769631 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vbfc7/crc-debug-rnqq6" Dec 05 11:21:30 crc kubenswrapper[4796]: I1205 11:21:30.043126 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="572f1c25-57a4-4aa6-9069-1765cc7d63cf" path="/var/lib/kubelet/pods/572f1c25-57a4-4aa6-9069-1765cc7d63cf/volumes" Dec 05 11:21:49 crc kubenswrapper[4796]: I1205 11:21:49.144143 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-56d7c49fdd-qssn9_1f02773b-d7af-447e-ab61-e59b12b5b138/barbican-api/0.log" Dec 05 11:21:49 crc kubenswrapper[4796]: I1205 11:21:49.214338 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-56d7c49fdd-qssn9_1f02773b-d7af-447e-ab61-e59b12b5b138/barbican-api-log/0.log" Dec 05 11:21:49 crc kubenswrapper[4796]: I1205 11:21:49.324712 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-b4d7f4754-f9kqr_6edb9afc-40e5-4a55-bf3e-b77c4fe4951b/barbican-keystone-listener/0.log" Dec 05 11:21:49 crc kubenswrapper[4796]: I1205 11:21:49.355588 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-b4d7f4754-f9kqr_6edb9afc-40e5-4a55-bf3e-b77c4fe4951b/barbican-keystone-listener-log/0.log" Dec 05 11:21:49 crc kubenswrapper[4796]: I1205 11:21:49.484526 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9b577c7dc-s8nmt_116dc4c5-e13e-494d-8909-3a3e23c45ec1/barbican-worker/0.log" Dec 05 11:21:49 crc kubenswrapper[4796]: I1205 11:21:49.500050 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9b577c7dc-s8nmt_116dc4c5-e13e-494d-8909-3a3e23c45ec1/barbican-worker-log/0.log" Dec 05 11:21:49 crc kubenswrapper[4796]: I1205 11:21:49.580601 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-wkz4c_9ce2ee96-991c-49bc-b64d-1dee82bc425a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:21:49 crc kubenswrapper[4796]: I1205 11:21:49.687642 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ea696700-f56d-4ca9-a810-410a2061a80e/ceilometer-central-agent/0.log" Dec 05 11:21:49 crc kubenswrapper[4796]: I1205 11:21:49.752276 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ea696700-f56d-4ca9-a810-410a2061a80e/ceilometer-notification-agent/0.log" Dec 05 11:21:49 crc kubenswrapper[4796]: I1205 11:21:49.769971 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ea696700-f56d-4ca9-a810-410a2061a80e/proxy-httpd/0.log" Dec 05 11:21:49 crc kubenswrapper[4796]: I1205 11:21:49.837732 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ea696700-f56d-4ca9-a810-410a2061a80e/sg-core/0.log" Dec 05 11:21:49 crc kubenswrapper[4796]: I1205 11:21:49.938830 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8ffca929-12a6-40b2-96ee-ff84ea1818dc/cinder-api-log/0.log" Dec 05 11:21:49 crc kubenswrapper[4796]: I1205 11:21:49.958424 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8ffca929-12a6-40b2-96ee-ff84ea1818dc/cinder-api/0.log" Dec 05 11:21:50 crc kubenswrapper[4796]: I1205 11:21:50.152962 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c53203c0-53e8-4b2b-90d3-a9833bd9e7f2/cinder-scheduler/0.log" Dec 05 11:21:50 crc kubenswrapper[4796]: I1205 11:21:50.164014 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_c53203c0-53e8-4b2b-90d3-a9833bd9e7f2/probe/0.log" Dec 05 11:21:50 crc kubenswrapper[4796]: I1205 11:21:50.252612 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-hxh2r_4856a801-fa7d-4150-b557-1b1a0066ce78/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:21:50 crc kubenswrapper[4796]: I1205 11:21:50.321382 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-6mflz_b4f493f2-c177-4784-8c6c-07c52336c07a/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:21:50 crc kubenswrapper[4796]: I1205 11:21:50.454898 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6574f55bb5-jfc7f_0dc9fea5-76b4-465f-9a96-a198004f4c2c/init/0.log" Dec 05 11:21:50 crc kubenswrapper[4796]: I1205 11:21:50.568882 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6574f55bb5-jfc7f_0dc9fea5-76b4-465f-9a96-a198004f4c2c/init/0.log" Dec 05 11:21:50 crc kubenswrapper[4796]: I1205 11:21:50.619317 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6574f55bb5-jfc7f_0dc9fea5-76b4-465f-9a96-a198004f4c2c/dnsmasq-dns/0.log" Dec 05 11:21:50 crc kubenswrapper[4796]: I1205 11:21:50.626574 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-qb7fz_a1045534-e8dd-4d18-a198-d50d1af5d79b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:21:50 crc kubenswrapper[4796]: I1205 11:21:50.769668 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f/glance-httpd/0.log" Dec 05 11:21:50 crc kubenswrapper[4796]: I1205 11:21:50.815844 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c7d7e6e5-5aa8-44d3-a55f-0e15e7ae447f/glance-log/0.log" Dec 05 11:21:50 crc kubenswrapper[4796]: I1205 11:21:50.949991 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_3e544456-c953-47f0-b274-0fc5d07483ce/glance-httpd/0.log" Dec 05 11:21:50 crc kubenswrapper[4796]: I1205 11:21:50.991569 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_3e544456-c953-47f0-b274-0fc5d07483ce/glance-log/0.log" Dec 05 11:21:51 crc kubenswrapper[4796]: I1205 11:21:51.088781 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-78ddb58-f7j44_05623376-2343-40fb-a4df-508ce1e333e2/horizon/0.log" Dec 05 11:21:51 crc kubenswrapper[4796]: I1205 11:21:51.289502 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-g2ksp_c5d11e0e-4240-4769-8a8a-945f78970a6c/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:21:51 crc kubenswrapper[4796]: I1205 11:21:51.377649 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-78ddb58-f7j44_05623376-2343-40fb-a4df-508ce1e333e2/horizon-log/0.log" Dec 05 11:21:51 crc kubenswrapper[4796]: I1205 11:21:51.445968 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-ktktx_61349c4c-5e04-4781-bbd0-1e6930083dd1/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:21:51 crc kubenswrapper[4796]: I1205 11:21:51.635620 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5dbb66bc57-9lclx_e6b0a08e-b4c3-4947-9c4e-67a863d92dca/keystone-api/0.log" Dec 05 11:21:51 crc kubenswrapper[4796]: I1205 11:21:51.646234 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29415541-7j2vs_8745cb1c-046c-423c-ada4-99fac12690eb/keystone-cron/0.log" Dec 05 11:21:51 crc kubenswrapper[4796]: I1205 11:21:51.745171 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_0e251696-adfe-46cb-87c3-651b3e038af2/kube-state-metrics/0.log" 
Dec 05 11:21:51 crc kubenswrapper[4796]: I1205 11:21:51.842845 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-8f2x8_0aba1742-7328-48a0-b9f5-af4c66636de3/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:21:52 crc kubenswrapper[4796]: I1205 11:21:52.145526 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-fb965878c-qncj9_22c4293b-736f-4c5a-b47d-6a0a870bf1da/neutron-httpd/0.log" Dec 05 11:21:52 crc kubenswrapper[4796]: I1205 11:21:52.199955 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-fb965878c-qncj9_22c4293b-736f-4c5a-b47d-6a0a870bf1da/neutron-api/0.log" Dec 05 11:21:52 crc kubenswrapper[4796]: I1205 11:21:52.225729 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-5nslk_692ab668-84d9-4673-8601-c09b4025b5fe/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:21:52 crc kubenswrapper[4796]: I1205 11:21:52.639502 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f0b532d4-1ab1-4d26-ac86-269a32a1bade/nova-api-log/0.log" Dec 05 11:21:52 crc kubenswrapper[4796]: I1205 11:21:52.774369 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_b1096b18-a70c-4076-ac0c-0a57532ec40e/nova-cell0-conductor-conductor/0.log" Dec 05 11:21:53 crc kubenswrapper[4796]: I1205 11:21:53.083194 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_70590d4b-3f25-4359-8a2d-984d9d98a9ed/nova-cell1-conductor-conductor/0.log" Dec 05 11:21:53 crc kubenswrapper[4796]: I1205 11:21:53.142812 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f0b532d4-1ab1-4d26-ac86-269a32a1bade/nova-api-api/0.log" Dec 05 11:21:53 crc kubenswrapper[4796]: I1205 11:21:53.166465 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6d012a13-a56f-40e9-9689-43405f7c5cfd/nova-cell1-novncproxy-novncproxy/0.log" Dec 05 11:21:53 crc kubenswrapper[4796]: I1205 11:21:53.353653 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-47tqx_43ae5283-caa9-4308-b825-3c937081341c/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:21:53 crc kubenswrapper[4796]: I1205 11:21:53.517294 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76/nova-metadata-log/0.log" Dec 05 11:21:53 crc kubenswrapper[4796]: I1205 11:21:53.741184 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e1935b08-3dfe-4990-9296-634f2dde999f/nova-scheduler-scheduler/0.log" Dec 05 11:21:53 crc kubenswrapper[4796]: I1205 11:21:53.818626 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0deffc65-bff4-419f-aa12-2c17432112a3/mysql-bootstrap/0.log" Dec 05 11:21:53 crc kubenswrapper[4796]: I1205 11:21:53.991333 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0deffc65-bff4-419f-aa12-2c17432112a3/mysql-bootstrap/0.log" Dec 05 11:21:54 crc kubenswrapper[4796]: I1205 11:21:54.020208 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0deffc65-bff4-419f-aa12-2c17432112a3/galera/0.log" Dec 05 11:21:54 crc kubenswrapper[4796]: I1205 11:21:54.177343 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e649ad8d-0ed0-495c-9abc-7220d750f060/mysql-bootstrap/0.log" Dec 05 11:21:54 crc kubenswrapper[4796]: I1205 11:21:54.361368 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e649ad8d-0ed0-495c-9abc-7220d750f060/mysql-bootstrap/0.log" Dec 05 11:21:54 crc kubenswrapper[4796]: I1205 11:21:54.387992 4796 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_e649ad8d-0ed0-495c-9abc-7220d750f060/galera/0.log" Dec 05 11:21:54 crc kubenswrapper[4796]: I1205 11:21:54.497287 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d97dc0a2-fb69-4ad7-8817-e65ab5b6cf76/nova-metadata-metadata/0.log" Dec 05 11:21:54 crc kubenswrapper[4796]: I1205 11:21:54.596209 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_127c1064-4744-43ab-afb3-91c03cee795d/openstackclient/0.log" Dec 05 11:21:54 crc kubenswrapper[4796]: I1205 11:21:54.636648 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-thfdb_dbbb0427-a102-4a95-a44e-d809d4334090/openstack-network-exporter/0.log" Dec 05 11:21:54 crc kubenswrapper[4796]: I1205 11:21:54.808454 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-z7p8z_f90214e2-6d6d-42b7-8a46-0fb779d31cba/ovsdb-server-init/0.log" Dec 05 11:21:55 crc kubenswrapper[4796]: I1205 11:21:55.024514 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-z7p8z_f90214e2-6d6d-42b7-8a46-0fb779d31cba/ovsdb-server-init/0.log" Dec 05 11:21:55 crc kubenswrapper[4796]: I1205 11:21:55.032706 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-z7p8z_f90214e2-6d6d-42b7-8a46-0fb779d31cba/ovsdb-server/0.log" Dec 05 11:21:55 crc kubenswrapper[4796]: I1205 11:21:55.054273 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-z7p8z_f90214e2-6d6d-42b7-8a46-0fb779d31cba/ovs-vswitchd/0.log" Dec 05 11:21:55 crc kubenswrapper[4796]: I1205 11:21:55.191489 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-vb5l5_2f5f2848-5f90-4a9f-a6f0-b6e83b586402/ovn-controller/0.log" Dec 05 11:21:55 crc kubenswrapper[4796]: I1205 11:21:55.312335 4796 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-wbr88_313cdd2e-bea3-40ac-aee2-b0452b059735/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:21:55 crc kubenswrapper[4796]: I1205 11:21:55.387903 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_75432bf8-6355-495f-aa1d-94928e9b15ba/openstack-network-exporter/0.log" Dec 05 11:21:55 crc kubenswrapper[4796]: I1205 11:21:55.466344 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_75432bf8-6355-495f-aa1d-94928e9b15ba/ovn-northd/0.log" Dec 05 11:21:55 crc kubenswrapper[4796]: I1205 11:21:55.597424 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7c35af0e-1df4-4529-a60f-3be3faaf8ec2/openstack-network-exporter/0.log" Dec 05 11:21:55 crc kubenswrapper[4796]: I1205 11:21:55.618375 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7c35af0e-1df4-4529-a60f-3be3faaf8ec2/ovsdbserver-nb/0.log" Dec 05 11:21:55 crc kubenswrapper[4796]: I1205 11:21:55.726329 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_929f06e1-44b6-4ce2-9391-4d41a94538fb/openstack-network-exporter/0.log" Dec 05 11:21:55 crc kubenswrapper[4796]: I1205 11:21:55.815297 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_929f06e1-44b6-4ce2-9391-4d41a94538fb/ovsdbserver-sb/0.log" Dec 05 11:21:55 crc kubenswrapper[4796]: I1205 11:21:55.902814 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-644c454648-8vjkb_8f219a85-81d2-4337-bed6-507debdb79dd/placement-api/0.log" Dec 05 11:21:55 crc kubenswrapper[4796]: I1205 11:21:55.997139 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-644c454648-8vjkb_8f219a85-81d2-4337-bed6-507debdb79dd/placement-log/0.log" Dec 05 11:21:56 crc kubenswrapper[4796]: I1205 11:21:56.043269 
4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d179e637-ffa5-41af-9038-6728586665a6/setup-container/0.log" Dec 05 11:21:56 crc kubenswrapper[4796]: I1205 11:21:56.263887 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d179e637-ffa5-41af-9038-6728586665a6/setup-container/0.log" Dec 05 11:21:56 crc kubenswrapper[4796]: I1205 11:21:56.303526 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_76ca18a4-f216-4325-b15a-adda1d95dddd/setup-container/0.log" Dec 05 11:21:56 crc kubenswrapper[4796]: I1205 11:21:56.314472 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_d179e637-ffa5-41af-9038-6728586665a6/rabbitmq/0.log" Dec 05 11:21:56 crc kubenswrapper[4796]: I1205 11:21:56.565125 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_76ca18a4-f216-4325-b15a-adda1d95dddd/setup-container/0.log" Dec 05 11:21:56 crc kubenswrapper[4796]: I1205 11:21:56.569818 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-kp5pr_b3328299-a078-40d9-90fd-94a0b4145ae5/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:21:56 crc kubenswrapper[4796]: I1205 11:21:56.602168 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_76ca18a4-f216-4325-b15a-adda1d95dddd/rabbitmq/0.log" Dec 05 11:21:56 crc kubenswrapper[4796]: I1205 11:21:56.719558 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-rqd97_c43b4994-6331-4ed4-9180-1b32253929cf/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:21:56 crc kubenswrapper[4796]: I1205 11:21:56.798502 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lf7tc_224a1954-bad6-417b-8942-de297ca3195c/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:21:56 crc kubenswrapper[4796]: I1205 11:21:56.947638 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-xssb4_ea2e27ab-6908-4bdf-be2d-ad81a1a5af5f/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:21:57 crc kubenswrapper[4796]: I1205 11:21:57.180724 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-t9tj8_9f94d556-2c7c-42db-8c80-521d055ccc68/ssh-known-hosts-edpm-deployment/0.log" Dec 05 11:21:57 crc kubenswrapper[4796]: I1205 11:21:57.425403 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-799657985-knzrm_9e4aeaf3-d2d1-43ab-8594-d293d8602be5/proxy-server/0.log" Dec 05 11:21:57 crc kubenswrapper[4796]: I1205 11:21:57.460847 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-799657985-knzrm_9e4aeaf3-d2d1-43ab-8594-d293d8602be5/proxy-httpd/0.log" Dec 05 11:21:57 crc kubenswrapper[4796]: I1205 11:21:57.495635 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-42p7z_5bfeb9c9-1808-4f43-b61b-4fafe36cda09/swift-ring-rebalance/0.log" Dec 05 11:21:57 crc kubenswrapper[4796]: I1205 11:21:57.605344 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/account-auditor/0.log" Dec 05 11:21:57 crc kubenswrapper[4796]: I1205 11:21:57.663662 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/account-reaper/0.log" Dec 05 11:21:57 crc kubenswrapper[4796]: I1205 11:21:57.697563 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/account-replicator/0.log" Dec 05 11:21:57 crc kubenswrapper[4796]: I1205 11:21:57.821834 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/account-server/0.log" Dec 05 11:21:57 crc kubenswrapper[4796]: I1205 11:21:57.832148 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/container-replicator/0.log" Dec 05 11:21:57 crc kubenswrapper[4796]: I1205 11:21:57.836069 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/container-auditor/0.log" Dec 05 11:21:57 crc kubenswrapper[4796]: I1205 11:21:57.893822 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/container-server/0.log" Dec 05 11:21:58 crc kubenswrapper[4796]: I1205 11:21:58.020138 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/container-updater/0.log" Dec 05 11:21:58 crc kubenswrapper[4796]: I1205 11:21:58.022160 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/object-expirer/0.log" Dec 05 11:21:58 crc kubenswrapper[4796]: I1205 11:21:58.065363 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/object-auditor/0.log" Dec 05 11:21:58 crc kubenswrapper[4796]: I1205 11:21:58.094321 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/object-replicator/0.log" Dec 05 11:21:58 crc kubenswrapper[4796]: I1205 11:21:58.256176 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/object-server/0.log" Dec 05 11:21:58 crc kubenswrapper[4796]: I1205 11:21:58.268930 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/object-updater/0.log" Dec 05 11:21:58 crc kubenswrapper[4796]: I1205 11:21:58.283178 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/swift-recon-cron/0.log" Dec 05 11:21:58 crc kubenswrapper[4796]: I1205 11:21:58.283191 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ff8476eb-20ea-41dc-97e0-d08619e42a30/rsync/0.log" Dec 05 11:21:58 crc kubenswrapper[4796]: I1205 11:21:58.482330 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_b3a6fb02-a525-41cb-96f5-ad01c2999e4d/tempest-tests-tempest-tests-runner/0.log" Dec 05 11:21:58 crc kubenswrapper[4796]: I1205 11:21:58.506327 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-kcnz4_91c465f7-7f18-43b4-9b15-d24ed713432f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:21:58 crc kubenswrapper[4796]: I1205 11:21:58.693498 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_8d8df5e9-4f31-4f63-ba36-276c43b02b75/test-operator-logs-container/0.log" Dec 05 11:21:58 crc kubenswrapper[4796]: I1205 11:21:58.781544 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-dq9z6_589c89c5-f3fd-44c9-ab63-8b1e4774d28f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 05 11:22:09 crc kubenswrapper[4796]: I1205 11:22:09.527519 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_c41250df-9d98-441a-a88f-6a17034b8d31/memcached/0.log" Dec 05 11:22:15 crc kubenswrapper[4796]: I1205 11:22:15.951451 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mx8k9"] Dec 05 11:22:15 crc kubenswrapper[4796]: E1205 11:22:15.952333 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="572f1c25-57a4-4aa6-9069-1765cc7d63cf" containerName="container-00" Dec 05 11:22:15 crc kubenswrapper[4796]: I1205 11:22:15.952348 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="572f1c25-57a4-4aa6-9069-1765cc7d63cf" containerName="container-00" Dec 05 11:22:15 crc kubenswrapper[4796]: I1205 11:22:15.952540 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="572f1c25-57a4-4aa6-9069-1765cc7d63cf" containerName="container-00" Dec 05 11:22:15 crc kubenswrapper[4796]: I1205 11:22:15.953916 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mx8k9" Dec 05 11:22:16 crc kubenswrapper[4796]: I1205 11:22:16.040453 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mx8k9"] Dec 05 11:22:16 crc kubenswrapper[4796]: I1205 11:22:16.059055 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d1c307d-f5ce-4860-84a9-2f50384c2455-catalog-content\") pod \"redhat-marketplace-mx8k9\" (UID: \"9d1c307d-f5ce-4860-84a9-2f50384c2455\") " pod="openshift-marketplace/redhat-marketplace-mx8k9" Dec 05 11:22:16 crc kubenswrapper[4796]: I1205 11:22:16.059220 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d1c307d-f5ce-4860-84a9-2f50384c2455-utilities\") pod \"redhat-marketplace-mx8k9\" (UID: \"9d1c307d-f5ce-4860-84a9-2f50384c2455\") " 
pod="openshift-marketplace/redhat-marketplace-mx8k9" Dec 05 11:22:16 crc kubenswrapper[4796]: I1205 11:22:16.059487 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq8f6\" (UniqueName: \"kubernetes.io/projected/9d1c307d-f5ce-4860-84a9-2f50384c2455-kube-api-access-vq8f6\") pod \"redhat-marketplace-mx8k9\" (UID: \"9d1c307d-f5ce-4860-84a9-2f50384c2455\") " pod="openshift-marketplace/redhat-marketplace-mx8k9" Dec 05 11:22:16 crc kubenswrapper[4796]: I1205 11:22:16.161483 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d1c307d-f5ce-4860-84a9-2f50384c2455-catalog-content\") pod \"redhat-marketplace-mx8k9\" (UID: \"9d1c307d-f5ce-4860-84a9-2f50384c2455\") " pod="openshift-marketplace/redhat-marketplace-mx8k9" Dec 05 11:22:16 crc kubenswrapper[4796]: I1205 11:22:16.161567 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d1c307d-f5ce-4860-84a9-2f50384c2455-utilities\") pod \"redhat-marketplace-mx8k9\" (UID: \"9d1c307d-f5ce-4860-84a9-2f50384c2455\") " pod="openshift-marketplace/redhat-marketplace-mx8k9" Dec 05 11:22:16 crc kubenswrapper[4796]: I1205 11:22:16.161656 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq8f6\" (UniqueName: \"kubernetes.io/projected/9d1c307d-f5ce-4860-84a9-2f50384c2455-kube-api-access-vq8f6\") pod \"redhat-marketplace-mx8k9\" (UID: \"9d1c307d-f5ce-4860-84a9-2f50384c2455\") " pod="openshift-marketplace/redhat-marketplace-mx8k9" Dec 05 11:22:16 crc kubenswrapper[4796]: I1205 11:22:16.162355 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d1c307d-f5ce-4860-84a9-2f50384c2455-catalog-content\") pod \"redhat-marketplace-mx8k9\" (UID: \"9d1c307d-f5ce-4860-84a9-2f50384c2455\") " 
pod="openshift-marketplace/redhat-marketplace-mx8k9" Dec 05 11:22:16 crc kubenswrapper[4796]: I1205 11:22:16.162555 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d1c307d-f5ce-4860-84a9-2f50384c2455-utilities\") pod \"redhat-marketplace-mx8k9\" (UID: \"9d1c307d-f5ce-4860-84a9-2f50384c2455\") " pod="openshift-marketplace/redhat-marketplace-mx8k9" Dec 05 11:22:16 crc kubenswrapper[4796]: I1205 11:22:16.184232 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq8f6\" (UniqueName: \"kubernetes.io/projected/9d1c307d-f5ce-4860-84a9-2f50384c2455-kube-api-access-vq8f6\") pod \"redhat-marketplace-mx8k9\" (UID: \"9d1c307d-f5ce-4860-84a9-2f50384c2455\") " pod="openshift-marketplace/redhat-marketplace-mx8k9" Dec 05 11:22:16 crc kubenswrapper[4796]: I1205 11:22:16.270209 4796 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mx8k9" Dec 05 11:22:16 crc kubenswrapper[4796]: I1205 11:22:16.715581 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mx8k9"] Dec 05 11:22:17 crc kubenswrapper[4796]: I1205 11:22:17.188526 4796 generic.go:334] "Generic (PLEG): container finished" podID="9d1c307d-f5ce-4860-84a9-2f50384c2455" containerID="28ec300ca1020ee4c8b399bcf3fb75ee77fc3b2f2382da1b2eca02682857c892" exitCode=0 Dec 05 11:22:17 crc kubenswrapper[4796]: I1205 11:22:17.188639 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mx8k9" event={"ID":"9d1c307d-f5ce-4860-84a9-2f50384c2455","Type":"ContainerDied","Data":"28ec300ca1020ee4c8b399bcf3fb75ee77fc3b2f2382da1b2eca02682857c892"} Dec 05 11:22:17 crc kubenswrapper[4796]: I1205 11:22:17.188929 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mx8k9" 
event={"ID":"9d1c307d-f5ce-4860-84a9-2f50384c2455","Type":"ContainerStarted","Data":"878a2bdd6555a1082b280103d48239a76eab80a0f2fa92cee49db1195edf6e79"} Dec 05 11:22:18 crc kubenswrapper[4796]: I1205 11:22:18.201762 4796 generic.go:334] "Generic (PLEG): container finished" podID="9d1c307d-f5ce-4860-84a9-2f50384c2455" containerID="a03d3599660a349b19822089e9303f63c3ce8be6030670547c1f1fe661effa06" exitCode=0 Dec 05 11:22:18 crc kubenswrapper[4796]: I1205 11:22:18.201870 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mx8k9" event={"ID":"9d1c307d-f5ce-4860-84a9-2f50384c2455","Type":"ContainerDied","Data":"a03d3599660a349b19822089e9303f63c3ce8be6030670547c1f1fe661effa06"} Dec 05 11:22:19 crc kubenswrapper[4796]: I1205 11:22:19.215949 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mx8k9" event={"ID":"9d1c307d-f5ce-4860-84a9-2f50384c2455","Type":"ContainerStarted","Data":"062d7abc5cd68eb1469b1d2aab91de14415a53c2748a4ed68155e73dc549f225"} Dec 05 11:22:19 crc kubenswrapper[4796]: I1205 11:22:19.241397 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mx8k9" podStartSLOduration=2.755394131 podStartE2EDuration="4.241376606s" podCreationTimestamp="2025-12-05 11:22:15 +0000 UTC" firstStartedPulling="2025-12-05 11:22:17.190440344 +0000 UTC m=+3283.478545857" lastFinishedPulling="2025-12-05 11:22:18.676422819 +0000 UTC m=+3284.964528332" observedRunningTime="2025-12-05 11:22:19.234858861 +0000 UTC m=+3285.522964375" watchObservedRunningTime="2025-12-05 11:22:19.241376606 +0000 UTC m=+3285.529482118" Dec 05 11:22:21 crc kubenswrapper[4796]: I1205 11:22:21.532180 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb_ce917c34-c2b6-4c47-ac86-8f9bd3e903a2/util/0.log" Dec 05 11:22:21 crc kubenswrapper[4796]: I1205 
11:22:21.651921 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb_ce917c34-c2b6-4c47-ac86-8f9bd3e903a2/util/0.log" Dec 05 11:22:21 crc kubenswrapper[4796]: I1205 11:22:21.667406 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb_ce917c34-c2b6-4c47-ac86-8f9bd3e903a2/pull/0.log" Dec 05 11:22:21 crc kubenswrapper[4796]: I1205 11:22:21.740515 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb_ce917c34-c2b6-4c47-ac86-8f9bd3e903a2/pull/0.log" Dec 05 11:22:21 crc kubenswrapper[4796]: I1205 11:22:21.831289 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb_ce917c34-c2b6-4c47-ac86-8f9bd3e903a2/util/0.log" Dec 05 11:22:21 crc kubenswrapper[4796]: I1205 11:22:21.858027 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb_ce917c34-c2b6-4c47-ac86-8f9bd3e903a2/pull/0.log" Dec 05 11:22:21 crc kubenswrapper[4796]: I1205 11:22:21.882149 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1c40d14f1ae9a05677db390e77ae0714427562b819eeb1822e0e1e9a74vwkzb_ce917c34-c2b6-4c47-ac86-8f9bd3e903a2/extract/0.log" Dec 05 11:22:22 crc kubenswrapper[4796]: I1205 11:22:22.020237 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-748df9766b-sv8rl_1fe815d0-1127-44a3-8d89-9964b3b5bbc2/kube-rbac-proxy/0.log" Dec 05 11:22:22 crc kubenswrapper[4796]: I1205 11:22:22.085578 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-748df9766b-sv8rl_1fe815d0-1127-44a3-8d89-9964b3b5bbc2/manager/0.log" Dec 05 11:22:22 crc kubenswrapper[4796]: I1205 11:22:22.093339 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b45f74f94-l9pgt_ecfc5e0d-8538-497e-b578-0ef75e0031db/kube-rbac-proxy/0.log" Dec 05 11:22:22 crc kubenswrapper[4796]: I1205 11:22:22.215789 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b45f74f94-l9pgt_ecfc5e0d-8538-497e-b578-0ef75e0031db/manager/0.log" Dec 05 11:22:22 crc kubenswrapper[4796]: I1205 11:22:22.251198 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5644f4c99-b5lst_42939bc6-488c-401e-a313-3b5cc9e75f3b/kube-rbac-proxy/0.log" Dec 05 11:22:22 crc kubenswrapper[4796]: I1205 11:22:22.320671 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5644f4c99-b5lst_42939bc6-488c-401e-a313-3b5cc9e75f3b/manager/0.log" Dec 05 11:22:22 crc kubenswrapper[4796]: I1205 11:22:22.418202 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6f75fb6b58-gz4gq_3d773ce3-0d67-4965-b84e-86f922daad38/kube-rbac-proxy/0.log" Dec 05 11:22:22 crc kubenswrapper[4796]: I1205 11:22:22.541412 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6f75fb6b58-gz4gq_3d773ce3-0d67-4965-b84e-86f922daad38/manager/0.log" Dec 05 11:22:22 crc kubenswrapper[4796]: I1205 11:22:22.593176 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-db55fc494-vtkgg_09474b29-37f5-4e66-9314-6af690b94758/kube-rbac-proxy/0.log" Dec 05 11:22:22 crc kubenswrapper[4796]: I1205 11:22:22.626228 
4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-db55fc494-vtkgg_09474b29-37f5-4e66-9314-6af690b94758/manager/0.log" Dec 05 11:22:22 crc kubenswrapper[4796]: I1205 11:22:22.734395 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-86b7548d4c-d59d5_95efa747-ba05-4c2f-86a8-037452c66764/kube-rbac-proxy/0.log" Dec 05 11:22:22 crc kubenswrapper[4796]: I1205 11:22:22.787321 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-86b7548d4c-d59d5_95efa747-ba05-4c2f-86a8-037452c66764/manager/0.log" Dec 05 11:22:22 crc kubenswrapper[4796]: I1205 11:22:22.865674 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-64989647d4-6pkqv_1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96/kube-rbac-proxy/0.log" Dec 05 11:22:23 crc kubenswrapper[4796]: I1205 11:22:23.005757 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-64989647d4-6pkqv_1b8f7cce-f7de-4fc0-a672-3ce06b1bdf96/manager/0.log" Dec 05 11:22:23 crc kubenswrapper[4796]: I1205 11:22:23.197159 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-7c55bc5499-tx2js_98b514bf-2dd0-4d60-9141-d70dead159cb/kube-rbac-proxy/0.log" Dec 05 11:22:23 crc kubenswrapper[4796]: I1205 11:22:23.280151 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-7c55bc5499-tx2js_98b514bf-2dd0-4d60-9141-d70dead159cb/manager/0.log" Dec 05 11:22:23 crc kubenswrapper[4796]: I1205 11:22:23.379355 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-847b767f55-wqhnd_8e886ef4-4f20-49e6-93d8-d011ac192923/kube-rbac-proxy/0.log" Dec 05 11:22:23 crc 
kubenswrapper[4796]: I1205 11:22:23.449389 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-847b767f55-wqhnd_8e886ef4-4f20-49e6-93d8-d011ac192923/manager/0.log" Dec 05 11:22:23 crc kubenswrapper[4796]: I1205 11:22:23.527781 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59c7d85948-v5lcv_ba227292-a494-43c0-9fbd-addbd8f48b6f/kube-rbac-proxy/0.log" Dec 05 11:22:23 crc kubenswrapper[4796]: I1205 11:22:23.537152 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-59c7d85948-v5lcv_ba227292-a494-43c0-9fbd-addbd8f48b6f/manager/0.log" Dec 05 11:22:23 crc kubenswrapper[4796]: I1205 11:22:23.629903 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66b4f6f898-cqrd7_f92cf54c-1bcd-4a73-86b2-e4407908953d/kube-rbac-proxy/0.log" Dec 05 11:22:23 crc kubenswrapper[4796]: I1205 11:22:23.713034 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66b4f6f898-cqrd7_f92cf54c-1bcd-4a73-86b2-e4407908953d/manager/0.log" Dec 05 11:22:23 crc kubenswrapper[4796]: I1205 11:22:23.814802 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7f8bc7fb5-pm9cz_bfe3615a-d5c7-4ad2-9e1e-ac4ab279c64f/kube-rbac-proxy/0.log" Dec 05 11:22:23 crc kubenswrapper[4796]: I1205 11:22:23.868225 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7f8bc7fb5-pm9cz_bfe3615a-d5c7-4ad2-9e1e-ac4ab279c64f/manager/0.log" Dec 05 11:22:23 crc kubenswrapper[4796]: I1205 11:22:23.938645 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79b74dfcd4-mhcb5_91dc33f2-985f-41d9-8c36-4c37aed1ec16/kube-rbac-proxy/0.log" Dec 05 11:22:24 crc kubenswrapper[4796]: I1205 11:22:24.047938 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79b74dfcd4-mhcb5_91dc33f2-985f-41d9-8c36-4c37aed1ec16/manager/0.log" Dec 05 11:22:24 crc kubenswrapper[4796]: I1205 11:22:24.092090 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6869548bb4-wsr9d_845825d1-623f-4e06-9f2c-d045910eee1a/kube-rbac-proxy/0.log" Dec 05 11:22:24 crc kubenswrapper[4796]: I1205 11:22:24.152202 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6869548bb4-wsr9d_845825d1-623f-4e06-9f2c-d045910eee1a/manager/0.log" Dec 05 11:22:24 crc kubenswrapper[4796]: I1205 11:22:24.257556 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8667b5c969stxrl_9314b19e-3947-4091-af58-82275f696602/manager/0.log" Dec 05 11:22:24 crc kubenswrapper[4796]: I1205 11:22:24.274143 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-8667b5c969stxrl_9314b19e-3947-4091-af58-82275f696602/kube-rbac-proxy/0.log" Dec 05 11:22:24 crc kubenswrapper[4796]: I1205 11:22:24.428454 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-9989f4965-wmbfg_b9a78df2-ecf6-425a-ace1-f005622e0025/kube-rbac-proxy/0.log" Dec 05 11:22:24 crc kubenswrapper[4796]: I1205 11:22:24.514423 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6b8c9fb9c8-7slmm_914b4688-6153-4282-8828-65d9500a53bf/kube-rbac-proxy/0.log" Dec 05 11:22:24 crc 
kubenswrapper[4796]: I1205 11:22:24.793335 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6b8c9fb9c8-7slmm_914b4688-6153-4282-8828-65d9500a53bf/operator/0.log" Dec 05 11:22:24 crc kubenswrapper[4796]: I1205 11:22:24.881367 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-gk94f_cc045b46-7b1a-4a74-9bcc-9fdf067dbf3d/registry-server/0.log" Dec 05 11:22:24 crc kubenswrapper[4796]: I1205 11:22:24.921604 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-59f9cfd57b-c586s_a1fec903-f9b8-49e1-a4f0-1526dcff64ea/kube-rbac-proxy/0.log" Dec 05 11:22:25 crc kubenswrapper[4796]: I1205 11:22:25.086294 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589d6b8ccb-h27pk_b566972c-0250-4692-8152-31dc732b4147/kube-rbac-proxy/0.log" Dec 05 11:22:25 crc kubenswrapper[4796]: I1205 11:22:25.106934 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-59f9cfd57b-c586s_a1fec903-f9b8-49e1-a4f0-1526dcff64ea/manager/0.log" Dec 05 11:22:25 crc kubenswrapper[4796]: I1205 11:22:25.259270 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589d6b8ccb-h27pk_b566972c-0250-4692-8152-31dc732b4147/manager/0.log" Dec 05 11:22:25 crc kubenswrapper[4796]: I1205 11:22:25.358388 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-9989f4965-wmbfg_b9a78df2-ecf6-425a-ace1-f005622e0025/manager/0.log" Dec 05 11:22:25 crc kubenswrapper[4796]: I1205 11:22:25.360441 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-mlsnn_ae34e165-a87d-4395-99c5-1a9f7129e6fe/operator/0.log" 
Dec 05 11:22:25 crc kubenswrapper[4796]: I1205 11:22:25.488014 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5bf496986d-rfkkm_e83951e6-5692-458d-aeba-ae8e6e8cfdd5/kube-rbac-proxy/0.log" Dec 05 11:22:25 crc kubenswrapper[4796]: I1205 11:22:25.555912 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5bf496986d-rfkkm_e83951e6-5692-458d-aeba-ae8e6e8cfdd5/manager/0.log" Dec 05 11:22:25 crc kubenswrapper[4796]: I1205 11:22:25.589122 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-75f49469b9-rs7fq_622bf26a-5bd4-4936-bd06-ae5ec514f130/kube-rbac-proxy/0.log" Dec 05 11:22:25 crc kubenswrapper[4796]: I1205 11:22:25.638669 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-75f49469b9-rs7fq_622bf26a-5bd4-4936-bd06-ae5ec514f130/manager/0.log" Dec 05 11:22:25 crc kubenswrapper[4796]: I1205 11:22:25.748961 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd78796cb-gcttm_aa5e433c-e704-4cbf-8db3-6efe20814f65/manager/0.log" Dec 05 11:22:25 crc kubenswrapper[4796]: I1205 11:22:25.810387 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd78796cb-gcttm_aa5e433c-e704-4cbf-8db3-6efe20814f65/kube-rbac-proxy/0.log" Dec 05 11:22:25 crc kubenswrapper[4796]: I1205 11:22:25.867233 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-784c978c5-v2fgq_a9f4475f-8ecc-4bc3-a195-e5cf592a1324/kube-rbac-proxy/0.log" Dec 05 11:22:25 crc kubenswrapper[4796]: I1205 11:22:25.925483 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-784c978c5-v2fgq_a9f4475f-8ecc-4bc3-a195-e5cf592a1324/manager/0.log" Dec 05 11:22:26 crc kubenswrapper[4796]: I1205 11:22:26.270772 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mx8k9" Dec 05 11:22:26 crc kubenswrapper[4796]: I1205 11:22:26.270860 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mx8k9" Dec 05 11:22:26 crc kubenswrapper[4796]: I1205 11:22:26.318652 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mx8k9" Dec 05 11:22:26 crc kubenswrapper[4796]: I1205 11:22:26.360490 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mx8k9" Dec 05 11:22:26 crc kubenswrapper[4796]: I1205 11:22:26.550292 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mx8k9"] Dec 05 11:22:28 crc kubenswrapper[4796]: I1205 11:22:28.296720 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mx8k9" podUID="9d1c307d-f5ce-4860-84a9-2f50384c2455" containerName="registry-server" containerID="cri-o://062d7abc5cd68eb1469b1d2aab91de14415a53c2748a4ed68155e73dc549f225" gracePeriod=2 Dec 05 11:22:28 crc kubenswrapper[4796]: I1205 11:22:28.711539 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mx8k9" Dec 05 11:22:28 crc kubenswrapper[4796]: I1205 11:22:28.909402 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d1c307d-f5ce-4860-84a9-2f50384c2455-catalog-content\") pod \"9d1c307d-f5ce-4860-84a9-2f50384c2455\" (UID: \"9d1c307d-f5ce-4860-84a9-2f50384c2455\") " Dec 05 11:22:28 crc kubenswrapper[4796]: I1205 11:22:28.909794 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq8f6\" (UniqueName: \"kubernetes.io/projected/9d1c307d-f5ce-4860-84a9-2f50384c2455-kube-api-access-vq8f6\") pod \"9d1c307d-f5ce-4860-84a9-2f50384c2455\" (UID: \"9d1c307d-f5ce-4860-84a9-2f50384c2455\") " Dec 05 11:22:28 crc kubenswrapper[4796]: I1205 11:22:28.909822 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d1c307d-f5ce-4860-84a9-2f50384c2455-utilities\") pod \"9d1c307d-f5ce-4860-84a9-2f50384c2455\" (UID: \"9d1c307d-f5ce-4860-84a9-2f50384c2455\") " Dec 05 11:22:28 crc kubenswrapper[4796]: I1205 11:22:28.910529 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d1c307d-f5ce-4860-84a9-2f50384c2455-utilities" (OuterVolumeSpecName: "utilities") pod "9d1c307d-f5ce-4860-84a9-2f50384c2455" (UID: "9d1c307d-f5ce-4860-84a9-2f50384c2455"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:22:28 crc kubenswrapper[4796]: I1205 11:22:28.910720 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d1c307d-f5ce-4860-84a9-2f50384c2455-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 11:22:28 crc kubenswrapper[4796]: I1205 11:22:28.916869 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d1c307d-f5ce-4860-84a9-2f50384c2455-kube-api-access-vq8f6" (OuterVolumeSpecName: "kube-api-access-vq8f6") pod "9d1c307d-f5ce-4860-84a9-2f50384c2455" (UID: "9d1c307d-f5ce-4860-84a9-2f50384c2455"). InnerVolumeSpecName "kube-api-access-vq8f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:22:28 crc kubenswrapper[4796]: I1205 11:22:28.930842 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d1c307d-f5ce-4860-84a9-2f50384c2455-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d1c307d-f5ce-4860-84a9-2f50384c2455" (UID: "9d1c307d-f5ce-4860-84a9-2f50384c2455"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:22:29 crc kubenswrapper[4796]: I1205 11:22:29.012750 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d1c307d-f5ce-4860-84a9-2f50384c2455-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 11:22:29 crc kubenswrapper[4796]: I1205 11:22:29.012782 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq8f6\" (UniqueName: \"kubernetes.io/projected/9d1c307d-f5ce-4860-84a9-2f50384c2455-kube-api-access-vq8f6\") on node \"crc\" DevicePath \"\"" Dec 05 11:22:29 crc kubenswrapper[4796]: I1205 11:22:29.306961 4796 generic.go:334] "Generic (PLEG): container finished" podID="9d1c307d-f5ce-4860-84a9-2f50384c2455" containerID="062d7abc5cd68eb1469b1d2aab91de14415a53c2748a4ed68155e73dc549f225" exitCode=0 Dec 05 11:22:29 crc kubenswrapper[4796]: I1205 11:22:29.307017 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mx8k9" event={"ID":"9d1c307d-f5ce-4860-84a9-2f50384c2455","Type":"ContainerDied","Data":"062d7abc5cd68eb1469b1d2aab91de14415a53c2748a4ed68155e73dc549f225"} Dec 05 11:22:29 crc kubenswrapper[4796]: I1205 11:22:29.307066 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mx8k9" Dec 05 11:22:29 crc kubenswrapper[4796]: I1205 11:22:29.307085 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mx8k9" event={"ID":"9d1c307d-f5ce-4860-84a9-2f50384c2455","Type":"ContainerDied","Data":"878a2bdd6555a1082b280103d48239a76eab80a0f2fa92cee49db1195edf6e79"} Dec 05 11:22:29 crc kubenswrapper[4796]: I1205 11:22:29.307119 4796 scope.go:117] "RemoveContainer" containerID="062d7abc5cd68eb1469b1d2aab91de14415a53c2748a4ed68155e73dc549f225" Dec 05 11:22:29 crc kubenswrapper[4796]: I1205 11:22:29.335343 4796 scope.go:117] "RemoveContainer" containerID="a03d3599660a349b19822089e9303f63c3ce8be6030670547c1f1fe661effa06" Dec 05 11:22:29 crc kubenswrapper[4796]: I1205 11:22:29.336348 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mx8k9"] Dec 05 11:22:29 crc kubenswrapper[4796]: I1205 11:22:29.345215 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mx8k9"] Dec 05 11:22:29 crc kubenswrapper[4796]: I1205 11:22:29.365641 4796 scope.go:117] "RemoveContainer" containerID="28ec300ca1020ee4c8b399bcf3fb75ee77fc3b2f2382da1b2eca02682857c892" Dec 05 11:22:29 crc kubenswrapper[4796]: I1205 11:22:29.411070 4796 scope.go:117] "RemoveContainer" containerID="062d7abc5cd68eb1469b1d2aab91de14415a53c2748a4ed68155e73dc549f225" Dec 05 11:22:29 crc kubenswrapper[4796]: E1205 11:22:29.412021 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"062d7abc5cd68eb1469b1d2aab91de14415a53c2748a4ed68155e73dc549f225\": container with ID starting with 062d7abc5cd68eb1469b1d2aab91de14415a53c2748a4ed68155e73dc549f225 not found: ID does not exist" containerID="062d7abc5cd68eb1469b1d2aab91de14415a53c2748a4ed68155e73dc549f225" Dec 05 11:22:29 crc kubenswrapper[4796]: I1205 11:22:29.412062 4796 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"062d7abc5cd68eb1469b1d2aab91de14415a53c2748a4ed68155e73dc549f225"} err="failed to get container status \"062d7abc5cd68eb1469b1d2aab91de14415a53c2748a4ed68155e73dc549f225\": rpc error: code = NotFound desc = could not find container \"062d7abc5cd68eb1469b1d2aab91de14415a53c2748a4ed68155e73dc549f225\": container with ID starting with 062d7abc5cd68eb1469b1d2aab91de14415a53c2748a4ed68155e73dc549f225 not found: ID does not exist" Dec 05 11:22:29 crc kubenswrapper[4796]: I1205 11:22:29.412095 4796 scope.go:117] "RemoveContainer" containerID="a03d3599660a349b19822089e9303f63c3ce8be6030670547c1f1fe661effa06" Dec 05 11:22:29 crc kubenswrapper[4796]: E1205 11:22:29.412348 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a03d3599660a349b19822089e9303f63c3ce8be6030670547c1f1fe661effa06\": container with ID starting with a03d3599660a349b19822089e9303f63c3ce8be6030670547c1f1fe661effa06 not found: ID does not exist" containerID="a03d3599660a349b19822089e9303f63c3ce8be6030670547c1f1fe661effa06" Dec 05 11:22:29 crc kubenswrapper[4796]: I1205 11:22:29.412384 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a03d3599660a349b19822089e9303f63c3ce8be6030670547c1f1fe661effa06"} err="failed to get container status \"a03d3599660a349b19822089e9303f63c3ce8be6030670547c1f1fe661effa06\": rpc error: code = NotFound desc = could not find container \"a03d3599660a349b19822089e9303f63c3ce8be6030670547c1f1fe661effa06\": container with ID starting with a03d3599660a349b19822089e9303f63c3ce8be6030670547c1f1fe661effa06 not found: ID does not exist" Dec 05 11:22:29 crc kubenswrapper[4796]: I1205 11:22:29.412408 4796 scope.go:117] "RemoveContainer" containerID="28ec300ca1020ee4c8b399bcf3fb75ee77fc3b2f2382da1b2eca02682857c892" Dec 05 11:22:29 crc kubenswrapper[4796]: E1205 
11:22:29.412622 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28ec300ca1020ee4c8b399bcf3fb75ee77fc3b2f2382da1b2eca02682857c892\": container with ID starting with 28ec300ca1020ee4c8b399bcf3fb75ee77fc3b2f2382da1b2eca02682857c892 not found: ID does not exist" containerID="28ec300ca1020ee4c8b399bcf3fb75ee77fc3b2f2382da1b2eca02682857c892" Dec 05 11:22:29 crc kubenswrapper[4796]: I1205 11:22:29.412649 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28ec300ca1020ee4c8b399bcf3fb75ee77fc3b2f2382da1b2eca02682857c892"} err="failed to get container status \"28ec300ca1020ee4c8b399bcf3fb75ee77fc3b2f2382da1b2eca02682857c892\": rpc error: code = NotFound desc = could not find container \"28ec300ca1020ee4c8b399bcf3fb75ee77fc3b2f2382da1b2eca02682857c892\": container with ID starting with 28ec300ca1020ee4c8b399bcf3fb75ee77fc3b2f2382da1b2eca02682857c892 not found: ID does not exist" Dec 05 11:22:30 crc kubenswrapper[4796]: I1205 11:22:30.059183 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d1c307d-f5ce-4860-84a9-2f50384c2455" path="/var/lib/kubelet/pods/9d1c307d-f5ce-4860-84a9-2f50384c2455/volumes" Dec 05 11:22:41 crc kubenswrapper[4796]: I1205 11:22:41.120927 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-dktck_766637c3-bd89-4f55-950e-68c553a5c6a4/control-plane-machine-set-operator/0.log" Dec 05 11:22:41 crc kubenswrapper[4796]: I1205 11:22:41.293910 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z2zft_77837609-281b-417a-8398-7732463eb92a/kube-rbac-proxy/0.log" Dec 05 11:22:41 crc kubenswrapper[4796]: I1205 11:22:41.294265 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z2zft_77837609-281b-417a-8398-7732463eb92a/machine-api-operator/0.log" Dec 05 11:22:52 crc kubenswrapper[4796]: I1205 11:22:52.785855 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-znnp5_4fbc7cf7-cc54-4a42-af7c-7c7451d7bfc0/cert-manager-controller/0.log" Dec 05 11:22:52 crc kubenswrapper[4796]: I1205 11:22:52.878820 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-qvqsf_55906207-dd26-4827-a801-1808e140a903/cert-manager-cainjector/0.log" Dec 05 11:22:52 crc kubenswrapper[4796]: I1205 11:22:52.917656 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-kgfhs_dd1a1054-f7c6-4515-9c64-074ce87c169f/cert-manager-webhook/0.log" Dec 05 11:23:04 crc kubenswrapper[4796]: I1205 11:23:04.602035 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-7zkrr_5df58dfa-f2af-439b-bd54-5253e8804e10/nmstate-console-plugin/0.log" Dec 05 11:23:04 crc kubenswrapper[4796]: I1205 11:23:04.697236 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-nmcxh_3d6ead77-e785-46e2-a9d8-1cb1bf83ae85/nmstate-handler/0.log" Dec 05 11:23:04 crc kubenswrapper[4796]: I1205 11:23:04.718223 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-wlc9g_6dbc6bec-7e30-4b6e-9db8-b0ea8f210baa/kube-rbac-proxy/0.log" Dec 05 11:23:04 crc kubenswrapper[4796]: I1205 11:23:04.777615 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-wlc9g_6dbc6bec-7e30-4b6e-9db8-b0ea8f210baa/nmstate-metrics/0.log" Dec 05 11:23:04 crc kubenswrapper[4796]: I1205 11:23:04.876002 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-8pj5q_f9217359-3b50-4237-bff8-75ff7eebf333/nmstate-operator/0.log" Dec 05 11:23:04 crc kubenswrapper[4796]: I1205 11:23:04.975018 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-sflfc_b69ee833-32fa-4d5f-b561-38b4e4a89a58/nmstate-webhook/0.log" Dec 05 11:23:05 crc kubenswrapper[4796]: I1205 11:23:05.176945 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 11:23:05 crc kubenswrapper[4796]: I1205 11:23:05.177018 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 11:23:17 crc kubenswrapper[4796]: I1205 11:23:17.587333 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-rnnld_131fd9f0-c98b-45a8-9443-fb22ab2c6c28/kube-rbac-proxy/0.log" Dec 05 11:23:17 crc kubenswrapper[4796]: I1205 11:23:17.606724 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-rnnld_131fd9f0-c98b-45a8-9443-fb22ab2c6c28/controller/0.log" Dec 05 11:23:17 crc kubenswrapper[4796]: I1205 11:23:17.778252 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-frr-files/0.log" Dec 05 11:23:17 crc kubenswrapper[4796]: I1205 11:23:17.890991 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-frr-files/0.log" 
Dec 05 11:23:17 crc kubenswrapper[4796]: I1205 11:23:17.923368 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-metrics/0.log" Dec 05 11:23:17 crc kubenswrapper[4796]: I1205 11:23:17.942720 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-reloader/0.log" Dec 05 11:23:17 crc kubenswrapper[4796]: I1205 11:23:17.952925 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-reloader/0.log" Dec 05 11:23:18 crc kubenswrapper[4796]: I1205 11:23:18.085198 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-reloader/0.log" Dec 05 11:23:18 crc kubenswrapper[4796]: I1205 11:23:18.096100 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-metrics/0.log" Dec 05 11:23:18 crc kubenswrapper[4796]: I1205 11:23:18.108844 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-frr-files/0.log" Dec 05 11:23:18 crc kubenswrapper[4796]: I1205 11:23:18.137097 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-metrics/0.log" Dec 05 11:23:18 crc kubenswrapper[4796]: I1205 11:23:18.279553 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-metrics/0.log" Dec 05 11:23:18 crc kubenswrapper[4796]: I1205 11:23:18.289335 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-frr-files/0.log" Dec 05 11:23:18 crc kubenswrapper[4796]: I1205 11:23:18.304606 4796 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/cp-reloader/0.log" Dec 05 11:23:18 crc kubenswrapper[4796]: I1205 11:23:18.318363 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/controller/0.log" Dec 05 11:23:18 crc kubenswrapper[4796]: I1205 11:23:18.456796 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/frr-metrics/0.log" Dec 05 11:23:18 crc kubenswrapper[4796]: I1205 11:23:18.458027 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/kube-rbac-proxy/0.log" Dec 05 11:23:18 crc kubenswrapper[4796]: I1205 11:23:18.502869 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/kube-rbac-proxy-frr/0.log" Dec 05 11:23:18 crc kubenswrapper[4796]: I1205 11:23:18.664840 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/reloader/0.log" Dec 05 11:23:18 crc kubenswrapper[4796]: I1205 11:23:18.684783 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-cvrqt_e62fb2ad-7ed8-4e09-bab4-9a5ea8b08610/frr-k8s-webhook-server/0.log" Dec 05 11:23:18 crc kubenswrapper[4796]: I1205 11:23:18.888547 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5848764dff-zxvnb_0a2ba1fb-8eb8-497a-8f31-931aca49243e/manager/0.log" Dec 05 11:23:19 crc kubenswrapper[4796]: I1205 11:23:19.055845 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-695488f64b-2qxth_249e98a9-0d33-4f5c-8102-03e46de4d20e/webhook-server/0.log" Dec 05 11:23:19 crc kubenswrapper[4796]: I1205 11:23:19.165859 
4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-q4znh_ad3da70e-b17d-414e-a68f-197abce5d6fe/kube-rbac-proxy/0.log" Dec 05 11:23:19 crc kubenswrapper[4796]: I1205 11:23:19.701788 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-q4znh_ad3da70e-b17d-414e-a68f-197abce5d6fe/speaker/0.log" Dec 05 11:23:19 crc kubenswrapper[4796]: I1205 11:23:19.717119 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9srrf_24fc7e38-4fa2-4f88-a1f6-99f14c1aaf63/frr/0.log" Dec 05 11:23:30 crc kubenswrapper[4796]: I1205 11:23:30.550227 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd_f393972a-bae2-4076-bc21-2f9f67d12875/util/0.log" Dec 05 11:23:30 crc kubenswrapper[4796]: I1205 11:23:30.675796 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd_f393972a-bae2-4076-bc21-2f9f67d12875/util/0.log" Dec 05 11:23:30 crc kubenswrapper[4796]: I1205 11:23:30.686829 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd_f393972a-bae2-4076-bc21-2f9f67d12875/pull/0.log" Dec 05 11:23:30 crc kubenswrapper[4796]: I1205 11:23:30.714248 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd_f393972a-bae2-4076-bc21-2f9f67d12875/pull/0.log" Dec 05 11:23:30 crc kubenswrapper[4796]: I1205 11:23:30.877915 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd_f393972a-bae2-4076-bc21-2f9f67d12875/util/0.log" Dec 05 11:23:30 crc kubenswrapper[4796]: I1205 11:23:30.879493 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd_f393972a-bae2-4076-bc21-2f9f67d12875/pull/0.log" Dec 05 11:23:30 crc kubenswrapper[4796]: I1205 11:23:30.888209 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f2wpzd_f393972a-bae2-4076-bc21-2f9f67d12875/extract/0.log" Dec 05 11:23:31 crc kubenswrapper[4796]: I1205 11:23:31.035043 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj_1aa954fc-98a6-42f8-b3c5-629859212fab/util/0.log" Dec 05 11:23:31 crc kubenswrapper[4796]: I1205 11:23:31.184440 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj_1aa954fc-98a6-42f8-b3c5-629859212fab/util/0.log" Dec 05 11:23:31 crc kubenswrapper[4796]: I1205 11:23:31.185211 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj_1aa954fc-98a6-42f8-b3c5-629859212fab/pull/0.log" Dec 05 11:23:31 crc kubenswrapper[4796]: I1205 11:23:31.211285 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj_1aa954fc-98a6-42f8-b3c5-629859212fab/pull/0.log" Dec 05 11:23:31 crc kubenswrapper[4796]: I1205 11:23:31.335889 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj_1aa954fc-98a6-42f8-b3c5-629859212fab/extract/0.log" Dec 05 11:23:31 crc kubenswrapper[4796]: I1205 11:23:31.342808 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj_1aa954fc-98a6-42f8-b3c5-629859212fab/util/0.log" Dec 
05 11:23:31 crc kubenswrapper[4796]: I1205 11:23:31.363003 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83hxwtj_1aa954fc-98a6-42f8-b3c5-629859212fab/pull/0.log" Dec 05 11:23:31 crc kubenswrapper[4796]: I1205 11:23:31.497019 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gklxw_a6dc75b2-1e17-4aeb-a328-b430bd9e33a7/extract-utilities/0.log" Dec 05 11:23:31 crc kubenswrapper[4796]: I1205 11:23:31.659126 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gklxw_a6dc75b2-1e17-4aeb-a328-b430bd9e33a7/extract-utilities/0.log" Dec 05 11:23:31 crc kubenswrapper[4796]: I1205 11:23:31.662780 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gklxw_a6dc75b2-1e17-4aeb-a328-b430bd9e33a7/extract-content/0.log" Dec 05 11:23:31 crc kubenswrapper[4796]: I1205 11:23:31.671808 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gklxw_a6dc75b2-1e17-4aeb-a328-b430bd9e33a7/extract-content/0.log" Dec 05 11:23:31 crc kubenswrapper[4796]: I1205 11:23:31.814570 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gklxw_a6dc75b2-1e17-4aeb-a328-b430bd9e33a7/extract-utilities/0.log" Dec 05 11:23:31 crc kubenswrapper[4796]: I1205 11:23:31.856476 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gklxw_a6dc75b2-1e17-4aeb-a328-b430bd9e33a7/extract-content/0.log" Dec 05 11:23:32 crc kubenswrapper[4796]: I1205 11:23:32.037995 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p9cbq_b84a3c6f-5d02-4cc3-8644-14c368436983/extract-utilities/0.log" Dec 05 11:23:32 crc kubenswrapper[4796]: I1205 11:23:32.190783 4796 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gklxw_a6dc75b2-1e17-4aeb-a328-b430bd9e33a7/registry-server/0.log" Dec 05 11:23:32 crc kubenswrapper[4796]: I1205 11:23:32.204903 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p9cbq_b84a3c6f-5d02-4cc3-8644-14c368436983/extract-content/0.log" Dec 05 11:23:32 crc kubenswrapper[4796]: I1205 11:23:32.228501 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p9cbq_b84a3c6f-5d02-4cc3-8644-14c368436983/extract-utilities/0.log" Dec 05 11:23:32 crc kubenswrapper[4796]: I1205 11:23:32.239830 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p9cbq_b84a3c6f-5d02-4cc3-8644-14c368436983/extract-content/0.log" Dec 05 11:23:32 crc kubenswrapper[4796]: I1205 11:23:32.413778 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p9cbq_b84a3c6f-5d02-4cc3-8644-14c368436983/extract-content/0.log" Dec 05 11:23:32 crc kubenswrapper[4796]: I1205 11:23:32.433368 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-p9cbq_b84a3c6f-5d02-4cc3-8644-14c368436983/extract-utilities/0.log" Dec 05 11:23:32 crc kubenswrapper[4796]: I1205 11:23:32.598337 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-fdddq_c6b04e09-09cc-4ca6-a5bd-61a46535f226/marketplace-operator/0.log" Dec 05 11:23:32 crc kubenswrapper[4796]: I1205 11:23:32.745202 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qvw7s_cc02dcd1-0779-4692-8d9a-78bd5cefa3ea/extract-utilities/0.log" Dec 05 11:23:32 crc kubenswrapper[4796]: I1205 11:23:32.888787 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-p9cbq_b84a3c6f-5d02-4cc3-8644-14c368436983/registry-server/0.log" Dec 05 11:23:32 crc kubenswrapper[4796]: I1205 11:23:32.959089 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qvw7s_cc02dcd1-0779-4692-8d9a-78bd5cefa3ea/extract-content/0.log" Dec 05 11:23:32 crc kubenswrapper[4796]: I1205 11:23:32.972704 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qvw7s_cc02dcd1-0779-4692-8d9a-78bd5cefa3ea/extract-utilities/0.log" Dec 05 11:23:33 crc kubenswrapper[4796]: I1205 11:23:33.020674 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qvw7s_cc02dcd1-0779-4692-8d9a-78bd5cefa3ea/extract-content/0.log" Dec 05 11:23:33 crc kubenswrapper[4796]: I1205 11:23:33.115877 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qvw7s_cc02dcd1-0779-4692-8d9a-78bd5cefa3ea/extract-utilities/0.log" Dec 05 11:23:33 crc kubenswrapper[4796]: I1205 11:23:33.142826 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qvw7s_cc02dcd1-0779-4692-8d9a-78bd5cefa3ea/extract-content/0.log" Dec 05 11:23:33 crc kubenswrapper[4796]: I1205 11:23:33.228389 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qvw7s_cc02dcd1-0779-4692-8d9a-78bd5cefa3ea/registry-server/0.log" Dec 05 11:23:33 crc kubenswrapper[4796]: I1205 11:23:33.450551 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fj572_05818125-1b72-4058-ace6-04c114506db0/extract-utilities/0.log" Dec 05 11:23:33 crc kubenswrapper[4796]: I1205 11:23:33.575704 4796 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-fj572_05818125-1b72-4058-ace6-04c114506db0/extract-utilities/0.log" Dec 05 11:23:33 crc kubenswrapper[4796]: I1205 11:23:33.605986 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fj572_05818125-1b72-4058-ace6-04c114506db0/extract-content/0.log" Dec 05 11:23:33 crc kubenswrapper[4796]: I1205 11:23:33.611470 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fj572_05818125-1b72-4058-ace6-04c114506db0/extract-content/0.log" Dec 05 11:23:33 crc kubenswrapper[4796]: I1205 11:23:33.737399 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fj572_05818125-1b72-4058-ace6-04c114506db0/extract-utilities/0.log" Dec 05 11:23:33 crc kubenswrapper[4796]: I1205 11:23:33.742399 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fj572_05818125-1b72-4058-ace6-04c114506db0/extract-content/0.log" Dec 05 11:23:34 crc kubenswrapper[4796]: I1205 11:23:34.114759 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-fj572_05818125-1b72-4058-ace6-04c114506db0/registry-server/0.log" Dec 05 11:23:35 crc kubenswrapper[4796]: I1205 11:23:35.177321 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 11:23:35 crc kubenswrapper[4796]: I1205 11:23:35.177804 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Dec 05 11:23:44 crc kubenswrapper[4796]: I1205 11:23:44.869847 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j5gcz"] Dec 05 11:23:44 crc kubenswrapper[4796]: E1205 11:23:44.870597 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d1c307d-f5ce-4860-84a9-2f50384c2455" containerName="extract-content" Dec 05 11:23:44 crc kubenswrapper[4796]: I1205 11:23:44.870614 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d1c307d-f5ce-4860-84a9-2f50384c2455" containerName="extract-content" Dec 05 11:23:44 crc kubenswrapper[4796]: E1205 11:23:44.870626 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d1c307d-f5ce-4860-84a9-2f50384c2455" containerName="registry-server" Dec 05 11:23:44 crc kubenswrapper[4796]: I1205 11:23:44.870631 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d1c307d-f5ce-4860-84a9-2f50384c2455" containerName="registry-server" Dec 05 11:23:44 crc kubenswrapper[4796]: E1205 11:23:44.870647 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d1c307d-f5ce-4860-84a9-2f50384c2455" containerName="extract-utilities" Dec 05 11:23:44 crc kubenswrapper[4796]: I1205 11:23:44.870653 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d1c307d-f5ce-4860-84a9-2f50384c2455" containerName="extract-utilities" Dec 05 11:23:44 crc kubenswrapper[4796]: I1205 11:23:44.870917 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d1c307d-f5ce-4860-84a9-2f50384c2455" containerName="registry-server" Dec 05 11:23:44 crc kubenswrapper[4796]: I1205 11:23:44.872340 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j5gcz" Dec 05 11:23:44 crc kubenswrapper[4796]: I1205 11:23:44.881861 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j5gcz"] Dec 05 11:23:44 crc kubenswrapper[4796]: I1205 11:23:44.888098 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tkmz\" (UniqueName: \"kubernetes.io/projected/5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4-kube-api-access-6tkmz\") pod \"community-operators-j5gcz\" (UID: \"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4\") " pod="openshift-marketplace/community-operators-j5gcz" Dec 05 11:23:44 crc kubenswrapper[4796]: I1205 11:23:44.888156 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4-utilities\") pod \"community-operators-j5gcz\" (UID: \"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4\") " pod="openshift-marketplace/community-operators-j5gcz" Dec 05 11:23:44 crc kubenswrapper[4796]: I1205 11:23:44.888358 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4-catalog-content\") pod \"community-operators-j5gcz\" (UID: \"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4\") " pod="openshift-marketplace/community-operators-j5gcz" Dec 05 11:23:44 crc kubenswrapper[4796]: I1205 11:23:44.990636 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4-catalog-content\") pod \"community-operators-j5gcz\" (UID: \"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4\") " pod="openshift-marketplace/community-operators-j5gcz" Dec 05 11:23:44 crc kubenswrapper[4796]: I1205 11:23:44.990770 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6tkmz\" (UniqueName: \"kubernetes.io/projected/5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4-kube-api-access-6tkmz\") pod \"community-operators-j5gcz\" (UID: \"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4\") " pod="openshift-marketplace/community-operators-j5gcz" Dec 05 11:23:44 crc kubenswrapper[4796]: I1205 11:23:44.990802 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4-utilities\") pod \"community-operators-j5gcz\" (UID: \"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4\") " pod="openshift-marketplace/community-operators-j5gcz" Dec 05 11:23:44 crc kubenswrapper[4796]: I1205 11:23:44.991222 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4-utilities\") pod \"community-operators-j5gcz\" (UID: \"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4\") " pod="openshift-marketplace/community-operators-j5gcz" Dec 05 11:23:44 crc kubenswrapper[4796]: I1205 11:23:44.991238 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4-catalog-content\") pod \"community-operators-j5gcz\" (UID: \"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4\") " pod="openshift-marketplace/community-operators-j5gcz" Dec 05 11:23:45 crc kubenswrapper[4796]: I1205 11:23:45.012070 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tkmz\" (UniqueName: \"kubernetes.io/projected/5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4-kube-api-access-6tkmz\") pod \"community-operators-j5gcz\" (UID: \"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4\") " pod="openshift-marketplace/community-operators-j5gcz" Dec 05 11:23:45 crc kubenswrapper[4796]: I1205 11:23:45.198018 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j5gcz" Dec 05 11:23:45 crc kubenswrapper[4796]: I1205 11:23:45.681651 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j5gcz"] Dec 05 11:23:46 crc kubenswrapper[4796]: I1205 11:23:46.034047 4796 generic.go:334] "Generic (PLEG): container finished" podID="5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4" containerID="3ea1bf37121f6c9371486273ebf31a05034683b0516404d51f8b26eaedeff189" exitCode=0 Dec 05 11:23:46 crc kubenswrapper[4796]: I1205 11:23:46.046582 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5gcz" event={"ID":"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4","Type":"ContainerDied","Data":"3ea1bf37121f6c9371486273ebf31a05034683b0516404d51f8b26eaedeff189"} Dec 05 11:23:46 crc kubenswrapper[4796]: I1205 11:23:46.046636 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5gcz" event={"ID":"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4","Type":"ContainerStarted","Data":"b602f6bea1059fd292fe07dc0a82af71206045e1561cacc77a9add3df8fa5bcb"} Dec 05 11:23:47 crc kubenswrapper[4796]: I1205 11:23:47.044698 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5gcz" event={"ID":"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4","Type":"ContainerStarted","Data":"9fb6bfa823ccbaf2893b4d77ec6c28bc74f4c6279fb1d83aeeaf330f4ebc2d00"} Dec 05 11:23:48 crc kubenswrapper[4796]: I1205 11:23:48.084321 4796 generic.go:334] "Generic (PLEG): container finished" podID="5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4" containerID="9fb6bfa823ccbaf2893b4d77ec6c28bc74f4c6279fb1d83aeeaf330f4ebc2d00" exitCode=0 Dec 05 11:23:48 crc kubenswrapper[4796]: I1205 11:23:48.084639 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5gcz" 
event={"ID":"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4","Type":"ContainerDied","Data":"9fb6bfa823ccbaf2893b4d77ec6c28bc74f4c6279fb1d83aeeaf330f4ebc2d00"} Dec 05 11:23:49 crc kubenswrapper[4796]: I1205 11:23:49.095523 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5gcz" event={"ID":"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4","Type":"ContainerStarted","Data":"615b329922f9aa87e4e1177bd9a3e4eac56d7aeb7f6d843e3cb6076aaa28f48b"} Dec 05 11:23:49 crc kubenswrapper[4796]: I1205 11:23:49.117829 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j5gcz" podStartSLOduration=2.621358674 podStartE2EDuration="5.117812022s" podCreationTimestamp="2025-12-05 11:23:44 +0000 UTC" firstStartedPulling="2025-12-05 11:23:46.037064584 +0000 UTC m=+3372.325170097" lastFinishedPulling="2025-12-05 11:23:48.533517931 +0000 UTC m=+3374.821623445" observedRunningTime="2025-12-05 11:23:49.109781803 +0000 UTC m=+3375.397887316" watchObservedRunningTime="2025-12-05 11:23:49.117812022 +0000 UTC m=+3375.405917535" Dec 05 11:23:55 crc kubenswrapper[4796]: I1205 11:23:55.198611 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j5gcz" Dec 05 11:23:55 crc kubenswrapper[4796]: I1205 11:23:55.199270 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j5gcz" Dec 05 11:23:55 crc kubenswrapper[4796]: I1205 11:23:55.246309 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j5gcz" Dec 05 11:23:56 crc kubenswrapper[4796]: I1205 11:23:56.225254 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j5gcz" Dec 05 11:23:56 crc kubenswrapper[4796]: I1205 11:23:56.284496 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-j5gcz"] Dec 05 11:23:58 crc kubenswrapper[4796]: I1205 11:23:58.194400 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j5gcz" podUID="5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4" containerName="registry-server" containerID="cri-o://615b329922f9aa87e4e1177bd9a3e4eac56d7aeb7f6d843e3cb6076aaa28f48b" gracePeriod=2 Dec 05 11:23:58 crc kubenswrapper[4796]: I1205 11:23:58.680161 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j5gcz" Dec 05 11:23:58 crc kubenswrapper[4796]: I1205 11:23:58.801559 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tkmz\" (UniqueName: \"kubernetes.io/projected/5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4-kube-api-access-6tkmz\") pod \"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4\" (UID: \"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4\") " Dec 05 11:23:58 crc kubenswrapper[4796]: I1205 11:23:58.801747 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4-utilities\") pod \"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4\" (UID: \"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4\") " Dec 05 11:23:58 crc kubenswrapper[4796]: I1205 11:23:58.801923 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4-catalog-content\") pod \"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4\" (UID: \"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4\") " Dec 05 11:23:58 crc kubenswrapper[4796]: I1205 11:23:58.802494 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4-utilities" (OuterVolumeSpecName: "utilities") pod "5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4" (UID: 
"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:23:58 crc kubenswrapper[4796]: I1205 11:23:58.802935 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 11:23:58 crc kubenswrapper[4796]: I1205 11:23:58.810043 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4-kube-api-access-6tkmz" (OuterVolumeSpecName: "kube-api-access-6tkmz") pod "5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4" (UID: "5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4"). InnerVolumeSpecName "kube-api-access-6tkmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:23:58 crc kubenswrapper[4796]: I1205 11:23:58.848817 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4" (UID: "5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:23:58 crc kubenswrapper[4796]: I1205 11:23:58.904771 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tkmz\" (UniqueName: \"kubernetes.io/projected/5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4-kube-api-access-6tkmz\") on node \"crc\" DevicePath \"\"" Dec 05 11:23:58 crc kubenswrapper[4796]: I1205 11:23:58.904810 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 11:23:59 crc kubenswrapper[4796]: I1205 11:23:59.204535 4796 generic.go:334] "Generic (PLEG): container finished" podID="5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4" containerID="615b329922f9aa87e4e1177bd9a3e4eac56d7aeb7f6d843e3cb6076aaa28f48b" exitCode=0 Dec 05 11:23:59 crc kubenswrapper[4796]: I1205 11:23:59.204575 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5gcz" event={"ID":"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4","Type":"ContainerDied","Data":"615b329922f9aa87e4e1177bd9a3e4eac56d7aeb7f6d843e3cb6076aaa28f48b"} Dec 05 11:23:59 crc kubenswrapper[4796]: I1205 11:23:59.204599 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j5gcz" event={"ID":"5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4","Type":"ContainerDied","Data":"b602f6bea1059fd292fe07dc0a82af71206045e1561cacc77a9add3df8fa5bcb"} Dec 05 11:23:59 crc kubenswrapper[4796]: I1205 11:23:59.204618 4796 scope.go:117] "RemoveContainer" containerID="615b329922f9aa87e4e1177bd9a3e4eac56d7aeb7f6d843e3cb6076aaa28f48b" Dec 05 11:23:59 crc kubenswrapper[4796]: I1205 11:23:59.204739 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j5gcz" Dec 05 11:23:59 crc kubenswrapper[4796]: I1205 11:23:59.237197 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j5gcz"] Dec 05 11:23:59 crc kubenswrapper[4796]: I1205 11:23:59.237283 4796 scope.go:117] "RemoveContainer" containerID="9fb6bfa823ccbaf2893b4d77ec6c28bc74f4c6279fb1d83aeeaf330f4ebc2d00" Dec 05 11:23:59 crc kubenswrapper[4796]: I1205 11:23:59.258795 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j5gcz"] Dec 05 11:23:59 crc kubenswrapper[4796]: I1205 11:23:59.269958 4796 scope.go:117] "RemoveContainer" containerID="3ea1bf37121f6c9371486273ebf31a05034683b0516404d51f8b26eaedeff189" Dec 05 11:23:59 crc kubenswrapper[4796]: I1205 11:23:59.302715 4796 scope.go:117] "RemoveContainer" containerID="615b329922f9aa87e4e1177bd9a3e4eac56d7aeb7f6d843e3cb6076aaa28f48b" Dec 05 11:23:59 crc kubenswrapper[4796]: E1205 11:23:59.303470 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"615b329922f9aa87e4e1177bd9a3e4eac56d7aeb7f6d843e3cb6076aaa28f48b\": container with ID starting with 615b329922f9aa87e4e1177bd9a3e4eac56d7aeb7f6d843e3cb6076aaa28f48b not found: ID does not exist" containerID="615b329922f9aa87e4e1177bd9a3e4eac56d7aeb7f6d843e3cb6076aaa28f48b" Dec 05 11:23:59 crc kubenswrapper[4796]: I1205 11:23:59.303525 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"615b329922f9aa87e4e1177bd9a3e4eac56d7aeb7f6d843e3cb6076aaa28f48b"} err="failed to get container status \"615b329922f9aa87e4e1177bd9a3e4eac56d7aeb7f6d843e3cb6076aaa28f48b\": rpc error: code = NotFound desc = could not find container \"615b329922f9aa87e4e1177bd9a3e4eac56d7aeb7f6d843e3cb6076aaa28f48b\": container with ID starting with 615b329922f9aa87e4e1177bd9a3e4eac56d7aeb7f6d843e3cb6076aaa28f48b not 
found: ID does not exist" Dec 05 11:23:59 crc kubenswrapper[4796]: I1205 11:23:59.303559 4796 scope.go:117] "RemoveContainer" containerID="9fb6bfa823ccbaf2893b4d77ec6c28bc74f4c6279fb1d83aeeaf330f4ebc2d00" Dec 05 11:23:59 crc kubenswrapper[4796]: E1205 11:23:59.303958 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fb6bfa823ccbaf2893b4d77ec6c28bc74f4c6279fb1d83aeeaf330f4ebc2d00\": container with ID starting with 9fb6bfa823ccbaf2893b4d77ec6c28bc74f4c6279fb1d83aeeaf330f4ebc2d00 not found: ID does not exist" containerID="9fb6bfa823ccbaf2893b4d77ec6c28bc74f4c6279fb1d83aeeaf330f4ebc2d00" Dec 05 11:23:59 crc kubenswrapper[4796]: I1205 11:23:59.303995 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fb6bfa823ccbaf2893b4d77ec6c28bc74f4c6279fb1d83aeeaf330f4ebc2d00"} err="failed to get container status \"9fb6bfa823ccbaf2893b4d77ec6c28bc74f4c6279fb1d83aeeaf330f4ebc2d00\": rpc error: code = NotFound desc = could not find container \"9fb6bfa823ccbaf2893b4d77ec6c28bc74f4c6279fb1d83aeeaf330f4ebc2d00\": container with ID starting with 9fb6bfa823ccbaf2893b4d77ec6c28bc74f4c6279fb1d83aeeaf330f4ebc2d00 not found: ID does not exist" Dec 05 11:23:59 crc kubenswrapper[4796]: I1205 11:23:59.304019 4796 scope.go:117] "RemoveContainer" containerID="3ea1bf37121f6c9371486273ebf31a05034683b0516404d51f8b26eaedeff189" Dec 05 11:23:59 crc kubenswrapper[4796]: E1205 11:23:59.304336 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ea1bf37121f6c9371486273ebf31a05034683b0516404d51f8b26eaedeff189\": container with ID starting with 3ea1bf37121f6c9371486273ebf31a05034683b0516404d51f8b26eaedeff189 not found: ID does not exist" containerID="3ea1bf37121f6c9371486273ebf31a05034683b0516404d51f8b26eaedeff189" Dec 05 11:23:59 crc kubenswrapper[4796]: I1205 11:23:59.304372 4796 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ea1bf37121f6c9371486273ebf31a05034683b0516404d51f8b26eaedeff189"} err="failed to get container status \"3ea1bf37121f6c9371486273ebf31a05034683b0516404d51f8b26eaedeff189\": rpc error: code = NotFound desc = could not find container \"3ea1bf37121f6c9371486273ebf31a05034683b0516404d51f8b26eaedeff189\": container with ID starting with 3ea1bf37121f6c9371486273ebf31a05034683b0516404d51f8b26eaedeff189 not found: ID does not exist" Dec 05 11:24:00 crc kubenswrapper[4796]: I1205 11:24:00.039728 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4" path="/var/lib/kubelet/pods/5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4/volumes" Dec 05 11:24:05 crc kubenswrapper[4796]: I1205 11:24:05.177866 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 11:24:05 crc kubenswrapper[4796]: I1205 11:24:05.179416 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 11:24:05 crc kubenswrapper[4796]: I1205 11:24:05.179552 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 11:24:05 crc kubenswrapper[4796]: I1205 11:24:05.180171 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"33bec1cb577def4fbb5cf2e2c123c52030235cb0ae4e0e573704cf5ac833978e"} pod="openshift-machine-config-operator/machine-config-daemon-9pllw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 11:24:05 crc kubenswrapper[4796]: I1205 11:24:05.180302 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" containerID="cri-o://33bec1cb577def4fbb5cf2e2c123c52030235cb0ae4e0e573704cf5ac833978e" gracePeriod=600 Dec 05 11:24:06 crc kubenswrapper[4796]: I1205 11:24:06.297727 4796 generic.go:334] "Generic (PLEG): container finished" podID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerID="33bec1cb577def4fbb5cf2e2c123c52030235cb0ae4e0e573704cf5ac833978e" exitCode=0 Dec 05 11:24:06 crc kubenswrapper[4796]: I1205 11:24:06.297798 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerDied","Data":"33bec1cb577def4fbb5cf2e2c123c52030235cb0ae4e0e573704cf5ac833978e"} Dec 05 11:24:06 crc kubenswrapper[4796]: I1205 11:24:06.298395 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerStarted","Data":"e35c9f3deeee76a1131b02454ed30af40fac498b96ce3d37697e8ff5a4aca12a"} Dec 05 11:24:06 crc kubenswrapper[4796]: I1205 11:24:06.298421 4796 scope.go:117] "RemoveContainer" containerID="c71f6775f39e5eb76507846814a853f0723896df37f75d98e39dad5f1963a5af" Dec 05 11:24:55 crc kubenswrapper[4796]: I1205 11:24:55.375046 4796 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gcz6t"] Dec 05 11:24:55 crc kubenswrapper[4796]: E1205 11:24:55.376006 
4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4" containerName="extract-utilities" Dec 05 11:24:55 crc kubenswrapper[4796]: I1205 11:24:55.376022 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4" containerName="extract-utilities" Dec 05 11:24:55 crc kubenswrapper[4796]: E1205 11:24:55.376036 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4" containerName="registry-server" Dec 05 11:24:55 crc kubenswrapper[4796]: I1205 11:24:55.376041 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4" containerName="registry-server" Dec 05 11:24:55 crc kubenswrapper[4796]: E1205 11:24:55.376064 4796 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4" containerName="extract-content" Dec 05 11:24:55 crc kubenswrapper[4796]: I1205 11:24:55.376069 4796 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4" containerName="extract-content" Dec 05 11:24:55 crc kubenswrapper[4796]: I1205 11:24:55.376247 4796 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cfbc9dd-4626-4f3c-95d1-13efd2f0e7b4" containerName="registry-server" Dec 05 11:24:55 crc kubenswrapper[4796]: I1205 11:24:55.377575 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gcz6t" Dec 05 11:24:55 crc kubenswrapper[4796]: I1205 11:24:55.385863 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gcz6t"] Dec 05 11:24:55 crc kubenswrapper[4796]: I1205 11:24:55.504716 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzxh5\" (UniqueName: \"kubernetes.io/projected/82534c3e-575a-4453-8939-34872d8606f4-kube-api-access-vzxh5\") pod \"redhat-operators-gcz6t\" (UID: \"82534c3e-575a-4453-8939-34872d8606f4\") " pod="openshift-marketplace/redhat-operators-gcz6t" Dec 05 11:24:55 crc kubenswrapper[4796]: I1205 11:24:55.505170 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82534c3e-575a-4453-8939-34872d8606f4-utilities\") pod \"redhat-operators-gcz6t\" (UID: \"82534c3e-575a-4453-8939-34872d8606f4\") " pod="openshift-marketplace/redhat-operators-gcz6t" Dec 05 11:24:55 crc kubenswrapper[4796]: I1205 11:24:55.505366 4796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82534c3e-575a-4453-8939-34872d8606f4-catalog-content\") pod \"redhat-operators-gcz6t\" (UID: \"82534c3e-575a-4453-8939-34872d8606f4\") " pod="openshift-marketplace/redhat-operators-gcz6t" Dec 05 11:24:55 crc kubenswrapper[4796]: I1205 11:24:55.608153 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzxh5\" (UniqueName: \"kubernetes.io/projected/82534c3e-575a-4453-8939-34872d8606f4-kube-api-access-vzxh5\") pod \"redhat-operators-gcz6t\" (UID: \"82534c3e-575a-4453-8939-34872d8606f4\") " pod="openshift-marketplace/redhat-operators-gcz6t" Dec 05 11:24:55 crc kubenswrapper[4796]: I1205 11:24:55.608329 4796 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82534c3e-575a-4453-8939-34872d8606f4-utilities\") pod \"redhat-operators-gcz6t\" (UID: \"82534c3e-575a-4453-8939-34872d8606f4\") " pod="openshift-marketplace/redhat-operators-gcz6t" Dec 05 11:24:55 crc kubenswrapper[4796]: I1205 11:24:55.608419 4796 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82534c3e-575a-4453-8939-34872d8606f4-catalog-content\") pod \"redhat-operators-gcz6t\" (UID: \"82534c3e-575a-4453-8939-34872d8606f4\") " pod="openshift-marketplace/redhat-operators-gcz6t" Dec 05 11:24:55 crc kubenswrapper[4796]: I1205 11:24:55.609165 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82534c3e-575a-4453-8939-34872d8606f4-catalog-content\") pod \"redhat-operators-gcz6t\" (UID: \"82534c3e-575a-4453-8939-34872d8606f4\") " pod="openshift-marketplace/redhat-operators-gcz6t" Dec 05 11:24:55 crc kubenswrapper[4796]: I1205 11:24:55.609258 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82534c3e-575a-4453-8939-34872d8606f4-utilities\") pod \"redhat-operators-gcz6t\" (UID: \"82534c3e-575a-4453-8939-34872d8606f4\") " pod="openshift-marketplace/redhat-operators-gcz6t" Dec 05 11:24:55 crc kubenswrapper[4796]: I1205 11:24:55.628889 4796 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzxh5\" (UniqueName: \"kubernetes.io/projected/82534c3e-575a-4453-8939-34872d8606f4-kube-api-access-vzxh5\") pod \"redhat-operators-gcz6t\" (UID: \"82534c3e-575a-4453-8939-34872d8606f4\") " pod="openshift-marketplace/redhat-operators-gcz6t" Dec 05 11:24:55 crc kubenswrapper[4796]: I1205 11:24:55.694190 4796 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gcz6t" Dec 05 11:24:56 crc kubenswrapper[4796]: I1205 11:24:56.025709 4796 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gcz6t"] Dec 05 11:24:56 crc kubenswrapper[4796]: I1205 11:24:56.847047 4796 generic.go:334] "Generic (PLEG): container finished" podID="86b2edc7-4753-4ca7-bb2d-1439505ffd8e" containerID="c3a181f9c510289ffe1e650731af5185874a11a05a34f2efb69c1b682eb13ee4" exitCode=0 Dec 05 11:24:56 crc kubenswrapper[4796]: I1205 11:24:56.847143 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vbfc7/must-gather-pwxxs" event={"ID":"86b2edc7-4753-4ca7-bb2d-1439505ffd8e","Type":"ContainerDied","Data":"c3a181f9c510289ffe1e650731af5185874a11a05a34f2efb69c1b682eb13ee4"} Dec 05 11:24:56 crc kubenswrapper[4796]: I1205 11:24:56.849771 4796 scope.go:117] "RemoveContainer" containerID="c3a181f9c510289ffe1e650731af5185874a11a05a34f2efb69c1b682eb13ee4" Dec 05 11:24:56 crc kubenswrapper[4796]: I1205 11:24:56.851258 4796 generic.go:334] "Generic (PLEG): container finished" podID="82534c3e-575a-4453-8939-34872d8606f4" containerID="f796f16d92ff65f4a9d3b5ed9340dc997b0fa519325b81e923202805268b8952" exitCode=0 Dec 05 11:24:56 crc kubenswrapper[4796]: I1205 11:24:56.851311 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcz6t" event={"ID":"82534c3e-575a-4453-8939-34872d8606f4","Type":"ContainerDied","Data":"f796f16d92ff65f4a9d3b5ed9340dc997b0fa519325b81e923202805268b8952"} Dec 05 11:24:56 crc kubenswrapper[4796]: I1205 11:24:56.851352 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcz6t" event={"ID":"82534c3e-575a-4453-8939-34872d8606f4","Type":"ContainerStarted","Data":"fbdcb2530a0d925aa88b2cd38eadfd3b52efd049049609e1fd16220f0aef2a24"} Dec 05 11:24:57 crc kubenswrapper[4796]: I1205 11:24:57.599634 4796 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-must-gather-vbfc7_must-gather-pwxxs_86b2edc7-4753-4ca7-bb2d-1439505ffd8e/gather/0.log" Dec 05 11:24:57 crc kubenswrapper[4796]: I1205 11:24:57.862004 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcz6t" event={"ID":"82534c3e-575a-4453-8939-34872d8606f4","Type":"ContainerStarted","Data":"25c494662205527a19a3d3ece796e3206ec44e29cb4d2e3d8d3236af0c8245b9"} Dec 05 11:24:58 crc kubenswrapper[4796]: I1205 11:24:58.873822 4796 generic.go:334] "Generic (PLEG): container finished" podID="82534c3e-575a-4453-8939-34872d8606f4" containerID="25c494662205527a19a3d3ece796e3206ec44e29cb4d2e3d8d3236af0c8245b9" exitCode=0 Dec 05 11:24:58 crc kubenswrapper[4796]: I1205 11:24:58.873932 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcz6t" event={"ID":"82534c3e-575a-4453-8939-34872d8606f4","Type":"ContainerDied","Data":"25c494662205527a19a3d3ece796e3206ec44e29cb4d2e3d8d3236af0c8245b9"} Dec 05 11:24:59 crc kubenswrapper[4796]: I1205 11:24:59.887830 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcz6t" event={"ID":"82534c3e-575a-4453-8939-34872d8606f4","Type":"ContainerStarted","Data":"0b7abc9287af02e992e706169dc55f2650eafeb66c2dfd899d5c4ccbd2bcc388"} Dec 05 11:24:59 crc kubenswrapper[4796]: I1205 11:24:59.917121 4796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gcz6t" podStartSLOduration=2.271933327 podStartE2EDuration="4.91709588s" podCreationTimestamp="2025-12-05 11:24:55 +0000 UTC" firstStartedPulling="2025-12-05 11:24:56.852981379 +0000 UTC m=+3443.141086892" lastFinishedPulling="2025-12-05 11:24:59.498143932 +0000 UTC m=+3445.786249445" observedRunningTime="2025-12-05 11:24:59.90815023 +0000 UTC m=+3446.196255744" watchObservedRunningTime="2025-12-05 11:24:59.91709588 +0000 UTC m=+3446.205201393" Dec 05 11:25:05 crc 
kubenswrapper[4796]: I1205 11:25:05.694342 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gcz6t" Dec 05 11:25:05 crc kubenswrapper[4796]: I1205 11:25:05.695993 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gcz6t" Dec 05 11:25:05 crc kubenswrapper[4796]: I1205 11:25:05.738493 4796 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gcz6t" Dec 05 11:25:05 crc kubenswrapper[4796]: I1205 11:25:05.997496 4796 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gcz6t" Dec 05 11:25:06 crc kubenswrapper[4796]: I1205 11:25:06.051965 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gcz6t"] Dec 05 11:25:06 crc kubenswrapper[4796]: I1205 11:25:06.405036 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vbfc7/must-gather-pwxxs"] Dec 05 11:25:06 crc kubenswrapper[4796]: I1205 11:25:06.405372 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vbfc7/must-gather-pwxxs" podUID="86b2edc7-4753-4ca7-bb2d-1439505ffd8e" containerName="copy" containerID="cri-o://5f9dfc3538dff1c102b7637c399e249da00e980aeb071110a91d329d4a7f46d6" gracePeriod=2 Dec 05 11:25:06 crc kubenswrapper[4796]: I1205 11:25:06.417365 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vbfc7/must-gather-pwxxs"] Dec 05 11:25:06 crc kubenswrapper[4796]: I1205 11:25:06.844660 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vbfc7_must-gather-pwxxs_86b2edc7-4753-4ca7-bb2d-1439505ffd8e/copy/0.log" Dec 05 11:25:06 crc kubenswrapper[4796]: I1205 11:25:06.846128 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vbfc7/must-gather-pwxxs" Dec 05 11:25:06 crc kubenswrapper[4796]: I1205 11:25:06.958721 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/86b2edc7-4753-4ca7-bb2d-1439505ffd8e-must-gather-output\") pod \"86b2edc7-4753-4ca7-bb2d-1439505ffd8e\" (UID: \"86b2edc7-4753-4ca7-bb2d-1439505ffd8e\") " Dec 05 11:25:06 crc kubenswrapper[4796]: I1205 11:25:06.959052 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv982\" (UniqueName: \"kubernetes.io/projected/86b2edc7-4753-4ca7-bb2d-1439505ffd8e-kube-api-access-qv982\") pod \"86b2edc7-4753-4ca7-bb2d-1439505ffd8e\" (UID: \"86b2edc7-4753-4ca7-bb2d-1439505ffd8e\") " Dec 05 11:25:06 crc kubenswrapper[4796]: I1205 11:25:06.965680 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b2edc7-4753-4ca7-bb2d-1439505ffd8e-kube-api-access-qv982" (OuterVolumeSpecName: "kube-api-access-qv982") pod "86b2edc7-4753-4ca7-bb2d-1439505ffd8e" (UID: "86b2edc7-4753-4ca7-bb2d-1439505ffd8e"). InnerVolumeSpecName "kube-api-access-qv982". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:25:06 crc kubenswrapper[4796]: I1205 11:25:06.967807 4796 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vbfc7_must-gather-pwxxs_86b2edc7-4753-4ca7-bb2d-1439505ffd8e/copy/0.log" Dec 05 11:25:06 crc kubenswrapper[4796]: I1205 11:25:06.968829 4796 generic.go:334] "Generic (PLEG): container finished" podID="86b2edc7-4753-4ca7-bb2d-1439505ffd8e" containerID="5f9dfc3538dff1c102b7637c399e249da00e980aeb071110a91d329d4a7f46d6" exitCode=143 Dec 05 11:25:06 crc kubenswrapper[4796]: I1205 11:25:06.969344 4796 scope.go:117] "RemoveContainer" containerID="5f9dfc3538dff1c102b7637c399e249da00e980aeb071110a91d329d4a7f46d6" Dec 05 11:25:06 crc kubenswrapper[4796]: I1205 11:25:06.969358 4796 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vbfc7/must-gather-pwxxs" Dec 05 11:25:07 crc kubenswrapper[4796]: I1205 11:25:07.016644 4796 scope.go:117] "RemoveContainer" containerID="c3a181f9c510289ffe1e650731af5185874a11a05a34f2efb69c1b682eb13ee4" Dec 05 11:25:07 crc kubenswrapper[4796]: I1205 11:25:07.062159 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv982\" (UniqueName: \"kubernetes.io/projected/86b2edc7-4753-4ca7-bb2d-1439505ffd8e-kube-api-access-qv982\") on node \"crc\" DevicePath \"\"" Dec 05 11:25:07 crc kubenswrapper[4796]: I1205 11:25:07.073882 4796 scope.go:117] "RemoveContainer" containerID="5f9dfc3538dff1c102b7637c399e249da00e980aeb071110a91d329d4a7f46d6" Dec 05 11:25:07 crc kubenswrapper[4796]: E1205 11:25:07.074326 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f9dfc3538dff1c102b7637c399e249da00e980aeb071110a91d329d4a7f46d6\": container with ID starting with 5f9dfc3538dff1c102b7637c399e249da00e980aeb071110a91d329d4a7f46d6 not found: ID does not exist" 
containerID="5f9dfc3538dff1c102b7637c399e249da00e980aeb071110a91d329d4a7f46d6" Dec 05 11:25:07 crc kubenswrapper[4796]: I1205 11:25:07.074368 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f9dfc3538dff1c102b7637c399e249da00e980aeb071110a91d329d4a7f46d6"} err="failed to get container status \"5f9dfc3538dff1c102b7637c399e249da00e980aeb071110a91d329d4a7f46d6\": rpc error: code = NotFound desc = could not find container \"5f9dfc3538dff1c102b7637c399e249da00e980aeb071110a91d329d4a7f46d6\": container with ID starting with 5f9dfc3538dff1c102b7637c399e249da00e980aeb071110a91d329d4a7f46d6 not found: ID does not exist" Dec 05 11:25:07 crc kubenswrapper[4796]: I1205 11:25:07.074395 4796 scope.go:117] "RemoveContainer" containerID="c3a181f9c510289ffe1e650731af5185874a11a05a34f2efb69c1b682eb13ee4" Dec 05 11:25:07 crc kubenswrapper[4796]: E1205 11:25:07.074659 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3a181f9c510289ffe1e650731af5185874a11a05a34f2efb69c1b682eb13ee4\": container with ID starting with c3a181f9c510289ffe1e650731af5185874a11a05a34f2efb69c1b682eb13ee4 not found: ID does not exist" containerID="c3a181f9c510289ffe1e650731af5185874a11a05a34f2efb69c1b682eb13ee4" Dec 05 11:25:07 crc kubenswrapper[4796]: I1205 11:25:07.074693 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3a181f9c510289ffe1e650731af5185874a11a05a34f2efb69c1b682eb13ee4"} err="failed to get container status \"c3a181f9c510289ffe1e650731af5185874a11a05a34f2efb69c1b682eb13ee4\": rpc error: code = NotFound desc = could not find container \"c3a181f9c510289ffe1e650731af5185874a11a05a34f2efb69c1b682eb13ee4\": container with ID starting with c3a181f9c510289ffe1e650731af5185874a11a05a34f2efb69c1b682eb13ee4 not found: ID does not exist" Dec 05 11:25:07 crc kubenswrapper[4796]: I1205 11:25:07.110946 4796 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86b2edc7-4753-4ca7-bb2d-1439505ffd8e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "86b2edc7-4753-4ca7-bb2d-1439505ffd8e" (UID: "86b2edc7-4753-4ca7-bb2d-1439505ffd8e"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:25:07 crc kubenswrapper[4796]: I1205 11:25:07.165118 4796 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/86b2edc7-4753-4ca7-bb2d-1439505ffd8e-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 05 11:25:07 crc kubenswrapper[4796]: I1205 11:25:07.979525 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gcz6t" podUID="82534c3e-575a-4453-8939-34872d8606f4" containerName="registry-server" containerID="cri-o://0b7abc9287af02e992e706169dc55f2650eafeb66c2dfd899d5c4ccbd2bcc388" gracePeriod=2 Dec 05 11:25:08 crc kubenswrapper[4796]: I1205 11:25:08.041217 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86b2edc7-4753-4ca7-bb2d-1439505ffd8e" path="/var/lib/kubelet/pods/86b2edc7-4753-4ca7-bb2d-1439505ffd8e/volumes" Dec 05 11:25:08 crc kubenswrapper[4796]: I1205 11:25:08.359322 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gcz6t" Dec 05 11:25:08 crc kubenswrapper[4796]: I1205 11:25:08.489967 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82534c3e-575a-4453-8939-34872d8606f4-utilities\") pod \"82534c3e-575a-4453-8939-34872d8606f4\" (UID: \"82534c3e-575a-4453-8939-34872d8606f4\") " Dec 05 11:25:08 crc kubenswrapper[4796]: I1205 11:25:08.490047 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82534c3e-575a-4453-8939-34872d8606f4-catalog-content\") pod \"82534c3e-575a-4453-8939-34872d8606f4\" (UID: \"82534c3e-575a-4453-8939-34872d8606f4\") " Dec 05 11:25:08 crc kubenswrapper[4796]: I1205 11:25:08.490097 4796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzxh5\" (UniqueName: \"kubernetes.io/projected/82534c3e-575a-4453-8939-34872d8606f4-kube-api-access-vzxh5\") pod \"82534c3e-575a-4453-8939-34872d8606f4\" (UID: \"82534c3e-575a-4453-8939-34872d8606f4\") " Dec 05 11:25:08 crc kubenswrapper[4796]: I1205 11:25:08.490857 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82534c3e-575a-4453-8939-34872d8606f4-utilities" (OuterVolumeSpecName: "utilities") pod "82534c3e-575a-4453-8939-34872d8606f4" (UID: "82534c3e-575a-4453-8939-34872d8606f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:25:08 crc kubenswrapper[4796]: I1205 11:25:08.496604 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82534c3e-575a-4453-8939-34872d8606f4-kube-api-access-vzxh5" (OuterVolumeSpecName: "kube-api-access-vzxh5") pod "82534c3e-575a-4453-8939-34872d8606f4" (UID: "82534c3e-575a-4453-8939-34872d8606f4"). InnerVolumeSpecName "kube-api-access-vzxh5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 05 11:25:08 crc kubenswrapper[4796]: I1205 11:25:08.591358 4796 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82534c3e-575a-4453-8939-34872d8606f4-utilities\") on node \"crc\" DevicePath \"\"" Dec 05 11:25:08 crc kubenswrapper[4796]: I1205 11:25:08.591385 4796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzxh5\" (UniqueName: \"kubernetes.io/projected/82534c3e-575a-4453-8939-34872d8606f4-kube-api-access-vzxh5\") on node \"crc\" DevicePath \"\"" Dec 05 11:25:09 crc kubenswrapper[4796]: I1205 11:25:09.009111 4796 generic.go:334] "Generic (PLEG): container finished" podID="82534c3e-575a-4453-8939-34872d8606f4" containerID="0b7abc9287af02e992e706169dc55f2650eafeb66c2dfd899d5c4ccbd2bcc388" exitCode=0 Dec 05 11:25:09 crc kubenswrapper[4796]: I1205 11:25:09.009160 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcz6t" event={"ID":"82534c3e-575a-4453-8939-34872d8606f4","Type":"ContainerDied","Data":"0b7abc9287af02e992e706169dc55f2650eafeb66c2dfd899d5c4ccbd2bcc388"} Dec 05 11:25:09 crc kubenswrapper[4796]: I1205 11:25:09.009186 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcz6t" event={"ID":"82534c3e-575a-4453-8939-34872d8606f4","Type":"ContainerDied","Data":"fbdcb2530a0d925aa88b2cd38eadfd3b52efd049049609e1fd16220f0aef2a24"} Dec 05 11:25:09 crc kubenswrapper[4796]: I1205 11:25:09.009204 4796 scope.go:117] "RemoveContainer" containerID="0b7abc9287af02e992e706169dc55f2650eafeb66c2dfd899d5c4ccbd2bcc388" Dec 05 11:25:09 crc kubenswrapper[4796]: I1205 11:25:09.009263 4796 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gcz6t" Dec 05 11:25:09 crc kubenswrapper[4796]: I1205 11:25:09.028722 4796 scope.go:117] "RemoveContainer" containerID="25c494662205527a19a3d3ece796e3206ec44e29cb4d2e3d8d3236af0c8245b9" Dec 05 11:25:09 crc kubenswrapper[4796]: I1205 11:25:09.044752 4796 scope.go:117] "RemoveContainer" containerID="f796f16d92ff65f4a9d3b5ed9340dc997b0fa519325b81e923202805268b8952" Dec 05 11:25:09 crc kubenswrapper[4796]: I1205 11:25:09.078546 4796 scope.go:117] "RemoveContainer" containerID="0b7abc9287af02e992e706169dc55f2650eafeb66c2dfd899d5c4ccbd2bcc388" Dec 05 11:25:09 crc kubenswrapper[4796]: E1205 11:25:09.079284 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b7abc9287af02e992e706169dc55f2650eafeb66c2dfd899d5c4ccbd2bcc388\": container with ID starting with 0b7abc9287af02e992e706169dc55f2650eafeb66c2dfd899d5c4ccbd2bcc388 not found: ID does not exist" containerID="0b7abc9287af02e992e706169dc55f2650eafeb66c2dfd899d5c4ccbd2bcc388" Dec 05 11:25:09 crc kubenswrapper[4796]: I1205 11:25:09.079324 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b7abc9287af02e992e706169dc55f2650eafeb66c2dfd899d5c4ccbd2bcc388"} err="failed to get container status \"0b7abc9287af02e992e706169dc55f2650eafeb66c2dfd899d5c4ccbd2bcc388\": rpc error: code = NotFound desc = could not find container \"0b7abc9287af02e992e706169dc55f2650eafeb66c2dfd899d5c4ccbd2bcc388\": container with ID starting with 0b7abc9287af02e992e706169dc55f2650eafeb66c2dfd899d5c4ccbd2bcc388 not found: ID does not exist" Dec 05 11:25:09 crc kubenswrapper[4796]: I1205 11:25:09.079361 4796 scope.go:117] "RemoveContainer" containerID="25c494662205527a19a3d3ece796e3206ec44e29cb4d2e3d8d3236af0c8245b9" Dec 05 11:25:09 crc kubenswrapper[4796]: E1205 11:25:09.079608 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"25c494662205527a19a3d3ece796e3206ec44e29cb4d2e3d8d3236af0c8245b9\": container with ID starting with 25c494662205527a19a3d3ece796e3206ec44e29cb4d2e3d8d3236af0c8245b9 not found: ID does not exist" containerID="25c494662205527a19a3d3ece796e3206ec44e29cb4d2e3d8d3236af0c8245b9" Dec 05 11:25:09 crc kubenswrapper[4796]: I1205 11:25:09.079634 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c494662205527a19a3d3ece796e3206ec44e29cb4d2e3d8d3236af0c8245b9"} err="failed to get container status \"25c494662205527a19a3d3ece796e3206ec44e29cb4d2e3d8d3236af0c8245b9\": rpc error: code = NotFound desc = could not find container \"25c494662205527a19a3d3ece796e3206ec44e29cb4d2e3d8d3236af0c8245b9\": container with ID starting with 25c494662205527a19a3d3ece796e3206ec44e29cb4d2e3d8d3236af0c8245b9 not found: ID does not exist" Dec 05 11:25:09 crc kubenswrapper[4796]: I1205 11:25:09.079648 4796 scope.go:117] "RemoveContainer" containerID="f796f16d92ff65f4a9d3b5ed9340dc997b0fa519325b81e923202805268b8952" Dec 05 11:25:09 crc kubenswrapper[4796]: E1205 11:25:09.080057 4796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f796f16d92ff65f4a9d3b5ed9340dc997b0fa519325b81e923202805268b8952\": container with ID starting with f796f16d92ff65f4a9d3b5ed9340dc997b0fa519325b81e923202805268b8952 not found: ID does not exist" containerID="f796f16d92ff65f4a9d3b5ed9340dc997b0fa519325b81e923202805268b8952" Dec 05 11:25:09 crc kubenswrapper[4796]: I1205 11:25:09.080082 4796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f796f16d92ff65f4a9d3b5ed9340dc997b0fa519325b81e923202805268b8952"} err="failed to get container status \"f796f16d92ff65f4a9d3b5ed9340dc997b0fa519325b81e923202805268b8952\": rpc error: code = NotFound desc = could not find container 
\"f796f16d92ff65f4a9d3b5ed9340dc997b0fa519325b81e923202805268b8952\": container with ID starting with f796f16d92ff65f4a9d3b5ed9340dc997b0fa519325b81e923202805268b8952 not found: ID does not exist" Dec 05 11:25:09 crc kubenswrapper[4796]: I1205 11:25:09.406346 4796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82534c3e-575a-4453-8939-34872d8606f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82534c3e-575a-4453-8939-34872d8606f4" (UID: "82534c3e-575a-4453-8939-34872d8606f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 05 11:25:09 crc kubenswrapper[4796]: I1205 11:25:09.406859 4796 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82534c3e-575a-4453-8939-34872d8606f4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 05 11:25:09 crc kubenswrapper[4796]: I1205 11:25:09.642038 4796 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gcz6t"] Dec 05 11:25:09 crc kubenswrapper[4796]: I1205 11:25:09.650418 4796 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gcz6t"] Dec 05 11:25:10 crc kubenswrapper[4796]: I1205 11:25:10.041586 4796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82534c3e-575a-4453-8939-34872d8606f4" path="/var/lib/kubelet/pods/82534c3e-575a-4453-8939-34872d8606f4/volumes" Dec 05 11:25:36 crc kubenswrapper[4796]: I1205 11:25:36.162329 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-799657985-knzrm" podUID="9e4aeaf3-d2d1-43ab-8594-d293d8602be5" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 05 11:26:05 crc kubenswrapper[4796]: I1205 11:26:05.177243 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 11:26:05 crc kubenswrapper[4796]: I1205 11:26:05.177867 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 11:26:35 crc kubenswrapper[4796]: I1205 11:26:35.177494 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 11:26:35 crc kubenswrapper[4796]: I1205 11:26:35.178930 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 11:27:05 crc kubenswrapper[4796]: I1205 11:27:05.176883 4796 patch_prober.go:28] interesting pod/machine-config-daemon-9pllw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 05 11:27:05 crc kubenswrapper[4796]: I1205 11:27:05.177497 4796 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 05 11:27:05 crc kubenswrapper[4796]: I1205 11:27:05.177539 4796 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" Dec 05 11:27:05 crc kubenswrapper[4796]: I1205 11:27:05.178178 4796 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e35c9f3deeee76a1131b02454ed30af40fac498b96ce3d37697e8ff5a4aca12a"} pod="openshift-machine-config-operator/machine-config-daemon-9pllw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 05 11:27:05 crc kubenswrapper[4796]: I1205 11:27:05.178225 4796 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerName="machine-config-daemon" containerID="cri-o://e35c9f3deeee76a1131b02454ed30af40fac498b96ce3d37697e8ff5a4aca12a" gracePeriod=600 Dec 05 11:27:05 crc kubenswrapper[4796]: E1205 11:27:05.295643 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:27:06 crc kubenswrapper[4796]: I1205 11:27:06.010033 4796 generic.go:334] "Generic (PLEG): container finished" podID="7796bae1-68a7-44b4-98cc-0dd83da754bc" containerID="e35c9f3deeee76a1131b02454ed30af40fac498b96ce3d37697e8ff5a4aca12a" exitCode=0 Dec 05 11:27:06 crc kubenswrapper[4796]: I1205 11:27:06.010108 4796 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-9pllw" event={"ID":"7796bae1-68a7-44b4-98cc-0dd83da754bc","Type":"ContainerDied","Data":"e35c9f3deeee76a1131b02454ed30af40fac498b96ce3d37697e8ff5a4aca12a"} Dec 05 11:27:06 crc kubenswrapper[4796]: I1205 11:27:06.010177 4796 scope.go:117] "RemoveContainer" containerID="33bec1cb577def4fbb5cf2e2c123c52030235cb0ae4e0e573704cf5ac833978e" Dec 05 11:27:06 crc kubenswrapper[4796]: I1205 11:27:06.010616 4796 scope.go:117] "RemoveContainer" containerID="e35c9f3deeee76a1131b02454ed30af40fac498b96ce3d37697e8ff5a4aca12a" Dec 05 11:27:06 crc kubenswrapper[4796]: E1205 11:27:06.011342 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:27:17 crc kubenswrapper[4796]: I1205 11:27:17.031043 4796 scope.go:117] "RemoveContainer" containerID="e35c9f3deeee76a1131b02454ed30af40fac498b96ce3d37697e8ff5a4aca12a" Dec 05 11:27:17 crc kubenswrapper[4796]: E1205 11:27:17.032140 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:27:28 crc kubenswrapper[4796]: I1205 11:27:28.032016 4796 scope.go:117] "RemoveContainer" containerID="e35c9f3deeee76a1131b02454ed30af40fac498b96ce3d37697e8ff5a4aca12a" Dec 05 11:27:28 crc kubenswrapper[4796]: E1205 11:27:28.032955 4796 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:27:41 crc kubenswrapper[4796]: I1205 11:27:41.032103 4796 scope.go:117] "RemoveContainer" containerID="e35c9f3deeee76a1131b02454ed30af40fac498b96ce3d37697e8ff5a4aca12a" Dec 05 11:27:41 crc kubenswrapper[4796]: E1205 11:27:41.032939 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:27:49 crc kubenswrapper[4796]: I1205 11:27:49.696449 4796 scope.go:117] "RemoveContainer" containerID="e17e25442fe6827023f4fa292a16e3d8fe282f75b90ba183f9d0ec2002526505" Dec 05 11:27:55 crc kubenswrapper[4796]: I1205 11:27:55.031442 4796 scope.go:117] "RemoveContainer" containerID="e35c9f3deeee76a1131b02454ed30af40fac498b96ce3d37697e8ff5a4aca12a" Dec 05 11:27:55 crc kubenswrapper[4796]: E1205 11:27:55.032289 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:28:08 crc kubenswrapper[4796]: I1205 
11:28:08.032157 4796 scope.go:117] "RemoveContainer" containerID="e35c9f3deeee76a1131b02454ed30af40fac498b96ce3d37697e8ff5a4aca12a" Dec 05 11:28:08 crc kubenswrapper[4796]: E1205 11:28:08.033286 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc" Dec 05 11:28:22 crc kubenswrapper[4796]: I1205 11:28:22.031179 4796 scope.go:117] "RemoveContainer" containerID="e35c9f3deeee76a1131b02454ed30af40fac498b96ce3d37697e8ff5a4aca12a" Dec 05 11:28:22 crc kubenswrapper[4796]: E1205 11:28:22.032085 4796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9pllw_openshift-machine-config-operator(7796bae1-68a7-44b4-98cc-0dd83da754bc)\"" pod="openshift-machine-config-operator/machine-config-daemon-9pllw" podUID="7796bae1-68a7-44b4-98cc-0dd83da754bc"